Markus Dreyer

Machine Learning Scientist

Amazon.com (Alexa)

Biography

Markus Dreyer is a machine learning scientist at Amazon. He has been part of the Alexa group since 2015, leading projects in natural language understanding and question answering. He has published on transfer learning, semi-supervised learning, graphical models, semantic parsing, neural lattice parsing, and entity linking.

Interests

  • Natural Language Processing
  • Artificial Intelligence/Deep Learning
  • Machine Learning

Education

  • PhD in Computer Science

    Johns Hopkins University

  • M.Sc. in Computer Science

    Johns Hopkins University

  • M.A. in Computational Linguistics, Linguistics, Philosophy

    Heidelberg University

Experience

Machine Learning Scientist

Amazon.com

March 2015 – Present
Seattle
Question answering, summarization, and natural language understanding (NLU).

Research Scientist

SDL (Language Weaver)

January 2011 – February 2015
Los Angeles, California
Algorithms and statistical models for large-scale machine translation (MT).

Research Assistant

CLSP, Johns Hopkins

January 2004 – December 2010
Baltimore, Maryland
Semi-supervised learning of nonconcatenative morphology, based on graphical models, the Dirichlet process and log-linear parameterization of finite-state machines.

Research Scientist

IBM, Speech Group

May 1999 – February 2003
Heidelberg, Germany
Text-to-speech (TTS) technology and parsing methods.

Recent Publications

Analyzing the Abstractiveness-Factuality Tradeoff With Nonlinear Abstractiveness Constraints

We analyze the tradeoff between factuality and abstractiveness of summaries. We introduce abstractiveness constraints to control the degree of abstractiveness at decoding time, and we apply this technique to characterize the abstractiveness-factuality tradeoff across multiple widely studied datasets, using extensive human evaluations. We train a neural summarization model on each dataset and visualize the rates of change in factuality as we gradually increase abstractiveness using our abstractiveness constraints. We observe that, while factuality generally drops with increased abstractiveness, different datasets lead to different rates of factuality decay. We propose new measures to quantify the tradeoff between factuality and abstractiveness, including muQAGS, which balances factuality with abstractiveness. We also quantify this tradeoff in previous works, aiming to establish baselines for the abstractiveness-factuality tradeoff that future publications can compare against.
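
The constraints can be pictured as a decoding-time penalty on long verbatim copies from the source. The sketch below is a simplified illustration of that idea, not the paper's actual formulation; the function names, penalty scheme, and copy-length threshold are made up for the example.

```python
# Hypothetical sketch: penalize beam-search candidates that would extend a
# long verbatim copy from the source, nudging decoding toward abstraction.

def copied_suffix_length(prefix, source_tokens):
    """Length of the longest suffix of `prefix` that appears verbatim in the source."""
    for n in range(len(prefix), 0, -1):
        suffix = prefix[-n:]
        if any(source_tokens[i:i + n] == suffix
               for i in range(len(source_tokens) - n + 1)):
            return n
    return 0

def constrained_score(base_log_prob, prefix, candidate, source_tokens,
                      max_copy_len=4, penalty=2.0):
    """Down-weight a candidate token that extends a copied n-gram past max_copy_len."""
    extended = prefix + [candidate]
    if copied_suffix_length(extended, source_tokens) > max_copy_len:
        return base_log_prob - penalty   # discourage, but do not forbid, long copies
    return base_log_prob

# Example: scoring two candidate continuations during decoding
source = "the quick brown fox jumps over the lazy dog".split()
prefix = "the quick brown fox".split()
print(constrained_score(-0.3, prefix, "jumps", source))  # penalized: extends a 5-token copy
print(constrained_score(-0.3, prefix, "leaps", source))  # not penalized
```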

Efficiently Summarizing Text and Graph Encodings of Multi-Document Clusters

This paper presents an efficient graph-enhanced approach to multi-document summarization (MDS) with an encoder-decoder Transformer model. This model is based on recent advances in pre-training both encoder and decoder on very large text data (Lewis et al., 2019), and it incorporates an efficient encoding mechanism (Beltagy et al., 2020) that avoids the quadratic memory growth typical of traditional Transformers. We show that this powerful combination not only scales to large input documents commonly found when summarizing news clusters; it also enables us to process additional input in the form of auxiliary graph representations, which we derive from the multi-document clusters. We present a mechanism to incorporate such graph information into the encoder-decoder model that was pre-trained on text only. Our approach leads to significant improvements on the Multi-News dataset, with an average improvement of 1.8 ROUGE points over previous work (Li et al., 2020). We also show improvements in a transfer-only setup on the DUC-2004 dataset. The graph encodings lead to summaries that are more abstractive. Human evaluation shows that they are also more informative and factually more consistent with their input documents.
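
As a rough illustration of the input side of such a setup, the hedged sketch below appends a toy linearized graph to a concatenated document cluster and feeds the result to a long-input encoder-decoder (the Hugging Face LED checkpoint allenai/led-base-16384). The graph triples, linearization, and special tags are invented for the example; the paper integrates graph information inside the encoder rather than as plain appended text.

```python
# Illustrative sketch only: prepare a multi-document cluster plus a
# linearized auxiliary "graph" for a long-input encoder-decoder (LED).
from transformers import AutoTokenizer, LEDForConditionalGeneration

docs = [
    "Document one of the news cluster ...",
    "Document two of the news cluster ...",
]
# Toy graph: (subject, relation, object) triples derived from the cluster.
graph_triples = [("storm", "hit", "coast"), ("officials", "issued", "warning")]
graph_text = " ".join(f"[S] {s} [P] {p} [O] {o}" for s, p, o in graph_triples)

# Concatenate documents, then append the linearized graph as extra input.
source = " </s> ".join(docs) + " </s> " + graph_text

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```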

Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization

We consider the problem of topic-focused abstractive summarization, where the goal is to generate an abstractive summary focused on a particular topic, a phrase of one or multiple words. We hypothesize that the task of generating topic-focused summaries can be improved by showing the model what it must not focus on. We introduce a deep reinforcement learning approach to topic-focused abstractive summarization, trained on rewards with a novel negative example baseline. We define the input in this problem as the source text preceded by the topic. We adapt the CNN-Daily Mail and New York Times summarization datasets for this task. We then show through experiments on existing rewards that the use of a negative example baseline can outperform the use of a self-critical baseline on ROUGE, BERTScore, and human evaluation metrics.
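
The core training signal can be sketched as a REINFORCE-style loss in which the baseline reward comes from a negative example (a summary generated for the wrong topic) instead of a greedy self-critical rollout. The snippet below is a minimal, hypothetical illustration of that reward shaping; it is not the paper's full training loop.

```python
# Hypothetical sketch of REINFORCE with a negative-example baseline.
import torch

def reinforce_loss(log_probs_sampled, reward_sampled, reward_negative):
    """REINFORCE loss with a negative-example baseline.

    log_probs_sampled: per-token log-probs of the sampled summary (1D tensor)
    reward_sampled:    reward (e.g., ROUGE vs. reference) of the sampled summary
    reward_negative:   reward of a summary generated for a negative topic
    """
    advantage = reward_sampled - reward_negative   # baseline = negative example
    return -advantage * log_probs_sampled.sum()    # minimizing this maximizes the advantage

# Toy usage with made-up numbers:
log_probs = torch.tensor([-0.7, -1.2, -0.4], requires_grad=True)
loss = reinforce_loss(log_probs, reward_sampled=0.42, reward_negative=0.18)
loss.backward()
```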

Transductive Learning for Abstractive News Summarization

Pre-trained language models have recently advanced abstractive summarization. These models are further fine-tuned on human-written references before summary generation at test time. In this work, we propose the first application of transductive learning to summarization. In this paradigm, a model can learn from the test set’s input before inference. To perform transduction, we propose to use the input document’s summarizing sentences to construct references for learning at test time. These sentences are often compressed and fused to form abstractive summaries and provide omitted details and additional context to the reader. We show that our approach yields state-of-the-art results on CNN/DM and NYT datasets. For instance, we achieve over 1 ROUGE-L point improvement on CNN/DM. Further, we show the benefits of transduction from older to more recent news. Finally, through human and automatic evaluation, we show that our summaries become more abstractive and coherent.
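
One way to picture the transduction step: before inference, build a pseudo-reference from each test document's own summarizing sentences and fine-tune on those pairs. The sketch below uses a crude lexical-centrality heuristic to pick the sentences; the selection method, helper names, and the commented-out fine-tuning call are assumptions made for illustration, not the paper's procedure.

```python
# Hypothetical sketch: construct a pseudo-reference from the test input itself.
from collections import Counter

def select_pseudo_reference(sentences, k=2):
    """Pick the k sentences with the highest word overlap with the whole document."""
    doc_counts = Counter(w for s in sentences for w in s.lower().split())
    def centrality(sent):
        return sum(doc_counts[w] for w in set(sent.lower().split()))
    ranked = sorted(sentences, key=centrality, reverse=True)
    return " ".join(ranked[:k])

test_doc = [
    "The city council approved the new transit plan on Tuesday.",
    "The plan adds three bus lines and extends light rail service.",
    "Council members debated the proposal for several hours.",
]
pseudo_ref = select_pseudo_reference(test_doc)
# fine_tune(model, [(" ".join(test_doc), pseudo_ref)])  # hypothetical: adapt, then generate
print(pseudo_ref)
```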

Multi-Task Networks with Universe, Group, and Task Feature Learning

We present methods for multi-task learning that take advantage of natural groupings of related tasks. Task groups may be defined along known properties of the tasks, such as task domain or language. Such task groups represent supervised information at the inter-task level and can be encoded into the model. We investigate two variants of neural network architectures that accomplish this, learning different feature spaces at the levels of individual tasks, task groups, and the universe of all tasks: (1) parallel architectures encode each input simultaneously into feature spaces at different levels; (2) serial architectures encode each input successively into feature spaces at different levels in the task hierarchy. We demonstrate the methods on natural language understanding (NLU) tasks, where a grouping of tasks into different task domains leads to improved performance on ATIS, Snips, and a large in-house dataset.
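
A minimal sketch of the parallel variant is shown below: each input is encoded, in parallel, by a shared universe encoder, a task-group encoder, and a task-specific encoder, and the three feature spaces are concatenated before a per-task classifier. Layer types, dimensions, and task/group names are illustrative assumptions, not the architecture as published.

```python
# Hypothetical sketch of the parallel universe/group/task architecture.
import torch
import torch.nn as nn

class ParallelMultiTaskModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, tasks, task_to_group, num_labels):
        super().__init__()
        groups = set(task_to_group.values())
        self.task_to_group = task_to_group
        self.universe_enc = nn.Linear(input_dim, hidden_dim)  # shared by all tasks
        self.group_enc = nn.ModuleDict({g: nn.Linear(input_dim, hidden_dim) for g in groups})
        self.task_enc = nn.ModuleDict({t: nn.Linear(input_dim, hidden_dim) for t in tasks})
        self.classifier = nn.ModuleDict(
            {t: nn.Linear(3 * hidden_dim, num_labels[t]) for t in tasks})

    def forward(self, x, task):
        group = self.task_to_group[task]
        feats = torch.cat([
            torch.relu(self.universe_enc(x)),      # universe-level features
            torch.relu(self.group_enc[group](x)),  # group-level features
            torch.relu(self.task_enc[task](x)),    # task-level features
        ], dim=-1)
        return self.classifier[task](feats)

# Toy usage: three intent tasks grouped into two domains.
model = ParallelMultiTaskModel(
    input_dim=16, hidden_dim=8,
    tasks=["atis_intent", "snips_intent", "booking_intent"],
    task_to_group={"atis_intent": "travel", "snips_intent": "assistant",
                   "booking_intent": "travel"},
    num_labels={"atis_intent": 5, "snips_intent": 7, "booking_intent": 3},
)
logits = model(torch.randn(2, 16), task="atis_intent")  # shape: (batch, num_labels)
```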