Advancing machine comprehension with question generation


Maluuba’s vision is to create machines that can comprehend, reason and communicate with humans.

We see a future where humans interact with machines just as they would with another human. We could ask a question in natural language and have the machine respond with an appropriate answer.

Yet answering questions is only one part of an interaction. In addition to our work on training machines to seek information, read and reason over text, and answer questions, we are now training machines to ask questions.

The importance of questions

While asking a question may seem straightforward, it is the process of asking the right question that drives better understanding of concepts and information. Many QA datasets are geared toward training models to answer questions – an extractive task – whereas asking questions is comparatively abstractive: it requires generating text that may not appear in the context document. Asking ‘good’ questions involves skills beyond those needed to answer them.

Figure: Examples of conditional question generation given a context and an answer

We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model with a combination of supervised and reinforcement learning to improve its performance. After standard maximum-likelihood training with teacher forcing, we fine-tune the model with policy-gradient techniques to maximize several rewards that measure question quality; most notably, one of these rewards is the performance of a question-answering system.
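
To make the two-stage training recipe concrete, here is a minimal sketch in Python/PyTorch. It assumes a toy vocabulary, random toy batches, and a hypothetical qa_reward function standing in for the score an external question-answering system would assign to a generated question; it illustrates the general pattern (maximum-likelihood training with teacher forcing, followed by REINFORCE-style policy-gradient fine-tuning), not the exact model from the paper.

# Hypothetical sketch of the two-stage training described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, MAX_LEN = 5000, 128, 256, 20

class Seq2SeqQG(nn.Module):
    """Encode (document + answer) tokens, decode a question token by token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt_in):
        # Teacher forcing: the gold question (shifted right) feeds the decoder.
        _, h = self.encoder(self.embed(src))
        dec, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec)                      # (batch, len, vocab) logits

    def sample(self, src):
        # Sample a question autoregressively for policy-gradient fine-tuning.
        _, h = self.encoder(self.embed(src))
        tok = torch.ones(src.size(0), 1, dtype=torch.long)   # <bos> id = 1
        log_probs, tokens = [], []
        for _ in range(MAX_LEN):
            dec, h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(dec[:, -1]))
            tok = dist.sample().unsqueeze(1)
            log_probs.append(dist.log_prob(tok.squeeze(1)))
            tokens.append(tok)
        return torch.cat(tokens, 1), torch.stack(log_probs, 1)

def qa_reward(question, src, answer):
    # Placeholder: in practice this would be an external QA system's score
    # (e.g. F1 of its predicted answer) when asked the generated question.
    return torch.rand(question.size(0))

model = Seq2SeqQG()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: maximum-likelihood training with teacher forcing.
src = torch.randint(2, VOCAB, (4, 50))        # toy document+answer batch
gold = torch.randint(2, VOCAB, (4, MAX_LEN))  # toy gold questions
logits = model(src, gold[:, :-1])
mle_loss = F.cross_entropy(logits.reshape(-1, VOCAB), gold[:, 1:].reshape(-1))
opt.zero_grad(); mle_loss.backward(); opt.step()

# Stage 2: policy-gradient (REINFORCE) fine-tuning on the QA-based reward.
questions, log_probs = model.sample(src)
reward = qa_reward(questions, src, answer=gold)
baseline = reward.mean()                      # simple variance-reducing baseline
pg_loss = -((reward - baseline).unsqueeze(1) * log_probs).mean()
opt.zero_grad(); pg_loss.backward(); opt.step()

Subtracting a mean-reward baseline is a standard way to reduce the variance of the policy-gradient estimate; in the paper, the reward combines several measures of question quality, with the performance of a question-answering system being the most notable.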

To our knowledge, this is the first end-to-end, text-to-text model for question generation.

Our research paper describes the development of the model, the training procedure, and the results, as well as the implications and next steps.

Research | Paul Gray