InDeep: Interpreting Deep Learning Models for Text and Sound
Consortium:
Principal investigators:
- Willem Zuidema (ILLC, University of Amsterdam)
- Afra Alishahi (Tilburg University)
- Grzegorz Chrupała (Tilburg University)
- Arianna Bisazza (University of Groningen)
- Tom Lentz (ILLC, University of Amsterdam)
- Louis ten Bosch (Radboud University, Nijmegen)
- Iris Hendrickx (Radboud University, Nijmegen)
- Antske Fokkens (Vrije Universiteit Amsterdam)
- Ashley Burgoyne (ILLC, University of Amsterdam)
Cofunding and cooperation partners:
- KPN
- Textkernel
- Deloitte
- AIgent
- Chordify
- Global textware
- TNO
- Floodtags
- Waag
- muZIEum
Funding:
2 million euro from the National Research Agenda programme (NWA-ORC 2019) of the Netherlands Organisation for Scientific Research (NWO), plus in-kind contributions from the cofunding partners. The project will run from mid-2021 until mid-2026.
Description:
In this project, the InterpretingDL network brings together pioneering researchers in the interpretability of deep learning models of text, language, speech and music. They collaborate with companies and not-for-profit institutions working with language, speech and music technology to develop applications that help assess the usefulness of alternative interpretability techniques on a range of different tasks.

In "justification" tasks, we look at how interpretability techniques can give users meaningful feedback on a model's decisions; examples include legal and medical document text mining and audio search. In "augmentation" tasks, we look at how these techniques facilitate the use of domain knowledge and of models from outside deep learning to make the models perform even better; examples include machine translation, music recommendation and writing feedback. In "interaction" tasks, we allow users to influence the functioning of their automated systems, both by providing interpretable information on how the system operates and by letting human-produced output feed into the internal states of the learning algorithm; examples include adapting speech recognition to non-standard accents and dialects, interactive music generation and machine-assisted translation.
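To give a concrete flavour of what a "justification"-style technique can look like, the sketch below computes gradient-times-input saliency scores for a text classifier, highlighting which input tokens contributed most to the model's decision. This is a minimal illustration using PyTorch and the Hugging Face transformers library; the model name and all code are assumptions made for this example, not InDeep deliverables or the project's own methods.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative model choice: a public sentiment classifier, standing in
# for a document classifier in a justification task. Not InDeep software.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

text = "The reviews praise the soundtrack but criticise the plot."
inputs = tokenizer(text, return_tensors="pt")

# Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)
logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits
predicted = logits.argmax(dim=-1).item()

# Gradient of the predicted class score w.r.t. each input embedding,
# reduced to one relevance score per token via gradient-times-input.
logits[0, predicted].backward()
saliency = (embeddings.grad * embeddings).sum(dim=-1).squeeze(0).detach()

# Print each token with its relevance score as user-facing feedback.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>12s} {score:+.4f}")
```

Gradient-times-input is only one of many attribution methods; part of the project's applied research is precisely to assess which such techniques give users the most meaningful feedback on a given task.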
Activities:
- Fundamental research on interpretability methods in NLP, speech and music processing
- Applied research on interpretability, in tight collaboration with the partners
- A public outreach program, involving citizen science projects, lectures, concerts, debates, demos and nights in the museum
- An industrial outreach program, involving master classes on deep learning and interpretability in NLP, speech and music processing
- Software packages and online demos