About Me
Welcome to my webpage!
I am a research scientist at Element AI, which was recently acquired by ServiceNow. I am also a Core Industry Member of Mila and an Adjunct Professor at McGill University.
I believe Human Language Technologies (HLT: a better name than NLP) will change the way humans interact with software and access knowledge. In fact, this has already happened (think web search), but this is just the beginning. I am interested in research questions at all levels of the HLT technology stack, including fundamentals of deep learning, foundation model training, task-specific algorithms (especially semantic parsing), and user experience with AI systems. Keyword-wise, my recent and ongoing work focuses on semantic parsing and task-oriented dialogue methods, code generation, systematic (compositional) generalization, and sample efficiency of neural models.
My prior research interests include grounding language in vision and action, question answering, speech recognition, machine translation, and structured prediction in general.
I did my PhD at Mila under the supervision of Yoshua Bengio.
A bit of bragging: I invented the content-based neural attention mechanism that is now a core tool in deep-learning-based natural language processing.
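The idea behind content-based (additive) attention is to score each encoder state against the current decoder state with a small feed-forward network, softmax the scores, and take a weighted sum. A minimal NumPy sketch (all weight names and dimensions here are illustrative, not from any particular implementation):

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    """Content-based (additive) attention:
    score[i] = v . tanh(W_q @ query + W_k @ keys[i]),
    then softmax the scores and return the weighted sum of keys."""
    scores = np.tanh(query @ W_q.T + keys @ W_k.T) @ v   # shape: (num_keys,)
    weights = np.exp(scores - scores.max())              # stable softmax
    weights /= weights.sum()
    context = weights @ keys                             # weighted sum of encoder states
    return context, weights

# toy example: 4 encoder states of dim 3, attention hidden dim 5
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 3))     # encoder states
query = rng.normal(size=3)         # decoder state
W_q = rng.normal(size=(5, 3))
W_k = rng.normal(size=(5, 3))
v = rng.normal(size=5)
context, weights = additive_attention(query, keys, W_q, W_k, v)
```

The softmax weights form a soft alignment over the input positions, which is what lets the model "attend" to the relevant source words at each decoding step.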
Research Highlights
-
Edge Transformer: a new neural architecture inspired by Prolog and Transformers
Systematic Generalization with Edge Transformers
L. Bergen, T. J. O’Donnell, D. Bahdanau
EMNLP 2021
-
semantic parsing by labeling edges and nodes of an aligned graph
LAGr: Labeling Aligned Graphs for Improving Systematic Generalization in Semantic Parsing
D. Jambor, D. Bahdanau
-
state-of-the-art text-to-SQL with constrained inference
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
T. Scholak, N. Schucher, D. Bahdanau
EMNLP 2021
Previous Work
-
a simpler and faster text-to-SQL model
DuoRAT: Towards Simpler Text-to-SQL Models
T. Scholak, R. Li, D. Bahdanau, H. de Vries, C. Pal
NAACL 2020
-
a position paper on ecological validity of existing datasets and benchmarks for language user interfaces
Towards Ecologically Valid Research on Language User Interfaces
H. de Vries, D. Bahdanau, C. Manning
-
a new systematic generalization test for visual question answering
CLOSURE: Assessing Systematic Generalization of CLEVR Models
D. Bahdanau, H. de Vries, T. J. O’Donnell, S. Murty, P. Beaudoin, Y. Bengio, A. Courville
arXiv
-
a study of systematic generalization in modular visual question answering architectures
Systematic Generalization: What Is Required and Can It Be Learned?
D. Bahdanau, S. Murty, M. Noukhovitch, T. H. Nguyen, H. de Vries, A. Courville
ICLR 2019
-
a new platform to study the sample efficiency of different instruction-following approaches
BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop
M. Chevalier-Boisvert, D. Bahdanau, S. Lahlou, L. Willems, C. Saharia, T.H. Nguyen, Y. Bengio
ICLR 2019
-
instruction-following with goal-states instead of complete demonstrations
Learning to Understand Goal Specifications by Modelling Reward
D. Bahdanau, F. Hill, J. Leike, E. Hughes, P. Kohli, E. Grefenstette
ICLR 2019
-
training criteria for sequence prediction tasks
An Actor-Critic Algorithm for Sequence Prediction
D. Bahdanau, P. Brakel, K. Xu, A. Goyal, R. Lowe, J. Pineau, A. Courville, Y. Bengio
ICLR 2017
-
adapting recurrent networks with attention to do speech recognition
End-to-End Attention-based Large Vocabulary Speech Recognition
D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, Y. Bengio
ICASSP 2016, oral
-
Attention-Based Models for Speech Recognition
J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, Y. Bengio
NIPS 2015, spotlight
-
deep learning software on top of Theano
Blocks and Fuel: frameworks for deep learning
B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Warde-Farley, J. Chorowski, Y. Bengio
2015, technical report
-
neural machine translation
Neural Machine Translation by Jointly Learning to Align and Translate
D. Bahdanau, K. Cho, Y. Bengio
ICLR 2015, oral