All members of the network are actively working on interpretability, but they have come to this topic from very different domains. The network brings together crucial expertise on methodology, lexical semantics, semantic and syntactic parsing, machine translation, computational phonology, music recommendation, language acquisition and more. This allows us to offer different perspectives on what interpretation of deep learning means in different scenarios and for different goals.

Willem Zuidema is associate professor of computational linguistics and cognitive science at the ILLC (UvA), with a long-term interest in the neural basis of language. Because of that cognitive interest, he was an early contributor to deep learning in NLP, with work on neural parsing published as early as 2008 (Borensztajn & Zuidema, 2008, CogSci) and pioneering contributions on tree-shaped neural networks, including the TreeLSTM (Le & Zuidema, 2015, *SEM; published concurrently with groups from Stanford and Montreal). In 2016 he and his students introduced Diagnostic Classification (Veldhoen, Hupkes, & Zuidema, 2016; Hupkes, Veldhoen, & Zuidema, 2018; Giulianelli, Harding, Mohnert, Hupkes, & Zuidema, 2018), now one of the key interpretability techniques. He has further worked on the integration of formal logic and deep learning (Veldhoen & Zuidema, 2017; Repplinger, Beinborn, & Zuidema, 2018; Mul & Zuidema, 2019). Other directly relevant work covers further interpretability techniques, including Representational Similarity Analysis (Abnar, Beinborn, Choenni, & Zuidema, 2019) and contextual decomposition (Jumelet et al., 2019).
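
In outline, Diagnostic Classification trains a simple "probe" classifier on a network's internal activations to test whether a hypothesized linguistic property is decodable from them. The following is a minimal, generic sketch of that idea on synthetic data (not the exact setup of the cited papers), using scikit-learn's logistic regression as the probe:

```python
# Minimal sketch of a diagnostic classifier, assuming hidden states have
# already been extracted from a trained network; the data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 256))  # one 256-d state per token
labels = rng.integers(0, 2, size=1000)        # hypothesized binary property

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# If the probe beats chance on held-out states, the property is taken
# to be (linearly) decodable from the representation.
print("probe accuracy:", probe.score(X_test, y_test))
```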

Afra Alishahi is an Associate Professor of Cognitive Science and Artificial Intelligence at Tilburg University. Her main research interests are developing computational models for studying the process of human language acquisition, studying the emergence of linguistic structure in grounded models of language learning, and developing tools and techniques for analyzing linguistic representations in neural models of language. She has received a number of research grants, including an NWO Aspasia grant, an NWO Natural Artificial Intelligence grant and an e-Science Center/NWO grant. She co-organized the BlackboxNLP 2018 workshop, the first official venue dedicated to analyzing and interpreting neural networks for NLP. She has a number of well-received publications on the interpretability of neural network models of language, including the paper that won the best paper award at the Conference on Computational Natural Language Learning (CoNLL) in 2017.

Grzegorz Chrupała is an assistant professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University. His research focuses on computational models of language learning from multimodal signals such as speech and vision, and on the analysis and interpretability of representations emerging in deep neural networks. He has served as area chair for ACL, EMNLP and CoNLL, and was general chair for Benelearn 2018. He co-organized the 2018 and 2019 editions of BlackboxNLP, the Workshop on Analyzing and Interpreting Neural Networks for NLP. Together with Afra Alishahi and students, he carried out some of the pioneering research on analyzing deep learning models of visually grounded language (Kádár, Chrupała, & Alishahi, 2017) as well as of speech (Alishahi, Barking, & Chrupała, 2017). In their most recent work in this area, Chrupała and Alishahi (2019) introduced methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) that directly quantify how strongly the information encoded in neural activation patterns corresponds to the information represented by symbolic structures.
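
In outline, RSA abstracts away from the coordinates of each representation space by comparing their pairwise similarity structures. Below is a minimal, generic sketch on synthetic data (not the exact method of the cited paper): cosine similarities within each space, compared via Spearman's rank correlation. The Tree Kernel variant would replace the symbolic similarity vector with tree-kernel values computed on syntax trees.

```python
# Minimal RSA sketch on synthetic data: correlate the pairwise similarity
# structure of neural activations with that of a symbolic representation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
neural = rng.normal(size=(50, 300))    # activation vectors for 50 sentences
symbolic = rng.normal(size=(50, 40))   # stand-in for symbolic feature vectors

# Condensed vectors of pairwise cosine similarities within each space.
sim_neural = 1 - pdist(neural, metric="cosine")
sim_symbolic = 1 - pdist(symbolic, metric="cosine")

# RSA score: rank correlation between the two similarity structures.
rho, _ = spearmanr(sim_neural, sim_symbolic)
print("RSA (Spearman) score:", rho)
```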

Arianna Bisazza is assistant professor at the Center for Language and Cognition (CLCG) of the University of Groningen, fully funded by a VENI grant since 2016. Her research aims to identify intrinsic limitations of current language modeling paradigms and to design robust NLP algorithms that can adapt to the diverse range of linguistic phenomena observed among the world's languages. She has a long track record of contributions to machine translation for challenging language pairs (Bisazza & Federico, 2012; Tran, Bisazza, & Monz, 2014; Fadaee, Bisazza, & Monz, 2017). Together with colleagues at the University of Amsterdam, she proposed the Recurrent Memory Network, one of the very first modifications of deep-learning-based language models aimed at improving interpretability (Tran, Bisazza, & Monz, 2016). Other recent contributions to the interpretability of NLP models include analyses of MT output (Bentivogli, Bisazza, Cettolo, & Federico, 2018) and probing tasks for recurrent language models (Tran, Bisazza, & Monz, 2018; Bisazza & Tump, 2018).

Dieuwke Hupkes is a PhD student at the Institute for Logic, Language and Computation, working with Willem Zuidema. Her research focuses on understanding how recurrent neural networks can learn and process the types of hierarchical structures that occur in natural language, a problem that for her touches on the core of understanding natural language. Although artificial neural networks are of course nothing like the real brain, she hopes that understanding the principles by which they encode such processes can still lead to a better understanding of language.

Tom Lentz is an assistant professor in computational phonology and cognitive science at the ILLC of the UvA. He works on the detection of prosodic structure in speech, including the automatic classification of pitch contours gathered in controlled experiments. He recently obtained an interdisciplinary research grant for a project on the detection of irony in speech (funding for one PhD student). Other relevant experience includes an investigation of individual variation in the use of prosody to mark focus (Lentz & Chen, 2015).

Louis ten Bosch (RU, Nijmegen) has expertise in automatic speech recognition, computational modelling of cognitive processes, speech decoding techniques using phonological features, and structure discovery methods. He is one of the co-organizers of the successful DNN interpretation session "What we learn from DNNs", held in 2018 at Interspeech, the language and speech technology conference, in Hyderabad, India. One recent advance in understanding artificial networks is to relate the mathematical layer-to-layer transformations in a network to more structural descriptions of datasets, such as those provided by linear mixed-effects models and Generalized Additive Models. More recently, in collaboration with Mirjam Ernestus, he has worked on computational models of human spoken word comprehension, a number of abstract-versus-exemplar studies in psycholinguistics, and (with Ton Dijkstra) computational modelling of the online sentence processing of idiomatic expressions.
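
As a minimal illustration of that kind of analysis (synthetic data; the measure and column names are hypothetical), a per-stimulus quantity derived from the network can be regressed on stimulus properties with a linear mixed-effects model, here via statsmodels:

```python
# Sketch: relate a per-stimulus network measure (e.g. the magnitude of a
# layer-to-layer activation change) to stimulus properties with a linear
# mixed-effects model. Data are synthetic; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
data = pd.DataFrame({
    "activation_change": rng.normal(size=n),            # network-derived measure
    "word_frequency": rng.normal(size=n),               # stimulus property
    "speaker": rng.integers(0, 20, size=n).astype(str)  # grouping factor
})

# Fixed effect of word frequency, random intercept per speaker.
model = smf.mixedlm("activation_change ~ word_frequency",
                    data, groups=data["speaker"]).fit()
print(model.summary())
```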

Iris Hendrickx (RU, Nijmegen) is a researcher in computational linguistics and digital humanities with a focus on machine learning, lexical and relational semantics, natural language processing, techniques for document understanding, and text mining. She provides the network with expertise on creating human-annotated text data for training neural models, and on applying and evaluating such models and augmenting them with domain expert knowledge.

Antske Fokkens is an assistant professor in computational linguistics at the Vrije Universiteit Amsterdam. Her main expertise lies in methodological questions in computational linguistics and, in particular, in understanding the implications of the chosen technologies, training data and features when applying computational language models in interdisciplinary contexts. In her research she has, among other things, pointed out fundamental problems with reproducibility (Fokkens et al., 2013) as well as the need for deeper analysis of the accuracy of our tools (Le & Fokkens, 2017; Fokkens et al., 2017). She collaborates extensively with researchers in the humanities and social sciences, as evidenced by multiple joint publications, grants and events, and is a member of the Computational Communication Science lab Amsterdam. She is a recognized international expert and has obtained multiple research grants, including a VENI grant in 2015 and co-applicantship of an NWO Vrije Competitie grant, as well as project funding from societal partners.

John Ashley Burgoyne is the Lecturer in Computational Musicology at the University of Amsterdam and a researcher in the Music Cognition Group at the Institute for Logic, Language and Computation. Cross-appointed in Musicology and Artificial Intelligence, he is interested in understanding musical behaviour at the audio level, using large-scale experiments and audio corpora. His McGill–Billboard corpus of time-aligned chord and structure transcriptions has served as a backbone for audio chord estimation techniques. His Hooked on Music project reached hundreds of thousands of participants in almost every country on Earth while collecting data to understand long-term musical memory.

References

Abnar, S., Beinborn, L., Choenni, R., & Zuidema, W. (2019). Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains.

Alishahi, A., Barking, M., & Chrupała, G. (2017). Encoding of phonology in a recurrent neural model of grounded speech. In R. Levy & L. Specia (Eds.), Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017) (pp. 368–378). Association for Computational Linguistics.

Bentivogli, L., Bisazza, A., Cettolo, M., & Federico, M. (2018). Neural versus phrase-based MT quality: An in-depth analysis on English–German and English–French. Computer Speech & Language, 49, 52–70.

Bisazza, A., & Federico, M. (2012). Cutting the Long Tail: Hybrid Language Models for Translation Style Adaptation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 439–448). Avignon, France: Association for Computational Linguistics.

Bisazza, A., & Tump, C. (2018). The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 2871–2876). Brussels, Belgium: Association for Computational Linguistics.

Chrupała, G., & Alishahi, A. (2019). Correlating neural and symbolic representations of language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

Fadaee, M., Bisazza, A., & Monz, C. (2017). Data Augmentation for Low-Resource Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 567–573). Association for Computational Linguistics.

Fokkens, A., ter Braake, S., Ockeloen, N., Vossen, P., Legêne, S., Schreiber, G., & de Boer, V. (2017). BiographyNet: Extracting Relations Between People and Events. In Á. Z. Bernád, C. Gruber, & M. Kaiser (Eds.), Europa baut auf Biographien: Aspekte, Bausteine, Normen und Standards für eine europäische Biographik (1st ed., pp. 193–224). Vienna: New Academic Press.

Fokkens, A., van Erp, M., Postma, M., Pedersen, T., Vossen, P., & Freire, N. (2013). Offspring from Reproduction Problems: What Replication Failure Teaches Us. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1691–1701). Sofia, Bulgaria: Association for Computational Linguistics.

Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D., & Zuidema, W. (2018). Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.

Hupkes, D., Veldhoen, S., & Zuidema, W. (2018). Visualisation and ‘Diagnostic Classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61, 907–926.

Kádár, Á., Chrupała, G., & Alishahi, A. (2017). Representation of Linguistic Form and Function in Recurrent Neural Networks. Computational Linguistics, 43, 761–780.

Le, M., & Fokkens, A. (2017). Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing. arXiv:1702.06794 [cs].

Le, P., & Zuidema, W. (2015). Compositional Distributional Semantics with Long Short Term Memory. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics (pp. 10–19). Denver, Colorado: Association for Computational Linguistics.

Lentz, T. O., & Chen, A. (2015). Unbalanced adult production and perception in prosody. In Proceedings of the 18th International Congress of Phonetic Sciences. University of Glasgow, Glasgow.

Mul, M., & Zuidema, W. (2019). Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization.

Repplinger, M., Beinborn, L., & Zuidema, W. (2018). Vector-space models of words and sentences. Nieuw Archief voor Wiskunde.

Tran, K., Bisazza, A., & Monz, C. (2016). Recurrent Memory Networks for Language Modeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 321–331).

Tran, K., Bisazza, A., & Monz, C. (2018). The Importance of Being Recurrent for Modeling Hierarchical Structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 4731–4736).

Tran, K. M., Bisazza, A., & Monz, C. (2014). Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1676–1688). Association for Computational Linguistics.

Veldhoen, S., Hupkes, D., & Zuidema, W. (2016). Diagnostic classifiers: revealing how neural networks process hierarchical structure. In Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (at NIPS).

Veldhoen, S., & Zuidema, W. (2017). Can Neural Networks learn Logical Reasoning? In Proceedings of the Conference on Logic and Machine Learning in Natural Language (LaML) (pp. 35–41). University of Gothenburg, Sweden.