Maluuba’s research powers a new era of artificial intelligence
WHO WE ARE
Maluuba is a global leader in artificial intelligence research focused on teaching machines to think, reason and communicate.
Our vision is a world where intelligent machines work hand-in-hand with humans to advance the collective intelligence of the human species and bring about positive social and economic impacts.
We’re an early leader in using deep reinforcement learning to solve language-understanding problems and in training machines to model decision-making capabilities of the human brain.
Our goal is to teach machines human-level literacy.
Our goal is to develop information-seeking agents that find, read, and then reason over natural language texts. Such an agent would have access to nearly all the human knowledge recorded in writing. We develop state-of-the-art deep learning algorithms to teach literacy to machines.
Maluuba’s work on Conversational User Interfaces focuses on building goal-driven dialogue bots that learn to engage in natural conversations with humans.
To achieve this, Maluuba is using novel techniques to simultaneously optimize users’ satisfaction and the agent’s knowledge acquisition.
Reinforcement learning is about training a software agent how to behave in a desired way. In contrast to supervised learning, with reinforcement learning the agent does not require examples of correct or incorrect behaviour. Instead, it can improve its behaviour by itself by interacting with the environment and observing the rewards it gets for its actions. At Maluuba, we build dialogue systems using cutting-edge reinforcement learning techniques.
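The reward-driven loop described above can be sketched with tabular Q-learning on a toy corridor environment. This is a minimal illustration only; the environment, hyperparameters, and code are our own and not part of any Maluuba system.

```python
import random

random.seed(0)

# A toy 5-state corridor: the agent starts at state 0 and earns a
# reward of +1 only on reaching state 4. No examples of correct or
# incorrect moves are given; the agent learns purely from rewards
# observed while interacting with the environment.
N_STATES, ACTIONS = 5, [-1, +1]   # actions: step left or step right
GOAL = N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current value estimates,
        # occasionally explore a random action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), GOAL)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: move the estimate toward the reward plus
        # the discounted value of the best next action
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The same principle scales up to dialogue: replace the corridor with a conversation state and the left/right moves with dialogue acts, and the agent learns its behaviour from user feedback rather than labelled transcripts.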
At Maluuba, we are fascinated by the potential of artificial intelligence. We operate one of the world’s leading research labs in natural language understanding and deep learning. We take a multidisciplinary approach to our research and work closely with academia.
We are making strong progress by focusing on challenging problems and driving new techniques in reinforcement learning, machine comprehension, and conversational interfaces.
Our research team publishes peer-reviewed papers that provide insight into the work we’re doing to advance knowledge in this field.
Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data
June 2016. We present a new approach to natural language generation using recurrent neural networks in an encoder-decoder framework. In contrast with previous work, our model uses both lexicalized and delexicalized versions of the slot-value pairs for each dialogue act…
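The lexicalized/delexicalized distinction can be shown with a toy example. Delexicalization replaces concrete slot values with placeholder tokens so a generator can learn value-independent templates; the slot names and utterance below are our own illustration, not the paper’s data.

```python
# Delexicalization swaps concrete slot values for placeholder tokens,
# so a single learned template can serve many different values; the
# lexicalized form keeps the original surface words.
def delexicalize(utterance, slot_values):
    """Replace each slot value in the utterance with a <SLOT> token."""
    for slot, value in slot_values.items():
        utterance = utterance.replace(value, f"<{slot.upper()}>")
    return utterance

slots = {"name": "Golden Wok", "food": "Chinese"}
lexicalized = "Golden Wok serves Chinese food."
delexicalized = delexicalize(lexicalized, slots)
print(delexicalized)  # <NAME> serves <FOOD> food.
```

Training on both forms, as the abstract describes, lets the model exploit the generality of templates while still seeing real lexical context.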
Iterative Alternating Neural Attention for Machine Reading
June 2016. We propose an iterative neural attention model and apply it to machine comprehension tasks. Our architecture deploys a novel alternating attention mechanism, and tightly integrates successful ideas from past works in machine reading comprehension to obtain state-of-the-art results on three datasets…
Natural Language Comprehension with the EpiReader
June 2016. We present the EpiReader, a novel model for machine comprehension of text. The EpiReader is an end-to-end neural model comprising two components: the first proposes a small set of candidate answers and the second formulates hypotheses using the proposed candidates, then reranks the hypotheses based on their concordance with the supporting text…
A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems
June 2016. We introduce a data-driven user simulator based on an encoder-decoder recurrent neural network. The model takes as input a sequence of dialogue contexts and outputs a sequence of dialogue acts corresponding to user intentions. The dialogue contexts include information about the machine acts and the status of the user goal…
Policy Networks with Two-Stage Training for Dialogue Systems
June 2016. We use policy networks for dialogue systems and train them in a two-stage fashion: supervised training and batch reinforcement learning followed by online reinforcement learning. An important feature of policy networks is that they directly provide a probability distribution over the action space, which enables supervised training. The combination of supervised and reinforcement learning is the main benefit of our method, which paves the way for developing trainable end-to-end dialogue systems…
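The key property the abstract highlights, that a policy network outputs a probability distribution over actions and can therefore be pretrained with supervised learning, can be sketched with a tiny softmax policy. The linear model, toy data, and training loop below are our own illustration under simplified assumptions, not the paper’s architecture.

```python
import numpy as np

np.random.seed(0)

# A policy network maps a dialogue state to a probability distribution
# over dialogue actions. Because its output is a distribution, it can
# be pretrained with supervised cross-entropy on expert (state, action)
# pairs before any reinforcement learning is applied.
N_FEATURES, N_ACTIONS = 4, 3
W = np.zeros((N_FEATURES, N_ACTIONS))

def policy(state):
    """Softmax over action logits: a distribution over the action space."""
    logits = state @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy expert data: four one-hot states, each labelled with the action
# an expert dialogue manager would take.
states = np.eye(N_FEATURES)
expert_actions = np.array([0, 1, 2, 0])

lr = 1.0
for _ in range(100):
    for s, a in zip(states, expert_actions):
        p = policy(s)
        # gradient of cross-entropy loss for a softmax: p - one_hot(a)
        W -= lr * np.outer(s, p - np.eye(N_ACTIONS)[a])

# After the supervised stage, the policy imitates the expert; a
# reinforcement-learning stage would then refine it from rewards.
learned = [int(np.argmax(policy(s))) for s in states]
print(learned)
```

The supervised stage gives the subsequent reinforcement-learning stage a sensible starting policy, which is the two-stage idea the abstract describes.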
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
March 2016. We investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy…
David L. Grannan is a co-founder of Light and serves as its Chief Executive Officer. Prior to Light, Dave was CEO of Vlingo, the first natural language speech recognition service for mobile phones. Vlingo provided speech recognition for the first Siri app and powered Samsung’s S-Voice product. Nuance Communications acquired Vlingo in 2012.
Yoshua Bengio is head of the Montreal Institute for Learning Algorithms (MILA), co-director of the CIFAR Neural Computation and Adaptive Perception program, and Canada Research Chair in Statistical Learning Algorithms, and he also holds the NSERC-Ubisoft industrial chair. His main research ambition is to understand the principles of learning that yield intelligence.
Richard S. Sutton is a fellow of the Association for the Advancement of Artificial Intelligence and co-author of the textbook Reinforcement Learning: An Introduction from MIT Press. Rich’s research interests centre on the learning problems facing a decision-maker interacting with its environment, which he sees as central to artificial intelligence. He is also interested in animal learning psychology, in connectionist networks, and generally in systems that continually improve their representations and models of the world.