Maluuba’s AI & Deep Learning predictions for 2017


Maluuba’s vision is to advance towards the goal of Artificial General Intelligence by creating literate machines that can think, reason and communicate like humans.
To help realise this vision, our growing team of research scientists and engineers are working to address some of the most challenging problems in language understanding. This team seeks to build machines that model innate human capabilities such as common sense, curiosity, creative thinking and decision-making. We are driving breakthrough innovations in machine comprehension and communication capabilities.
In this post, five members of Maluuba’s research team share their perspectives on the trends, initiatives and applications of AI that they think will be most transformative in 2017 and beyond.

Adam Trischler, Senior Research Scientist

What major advances do you foresee in artificial intelligence in 2017?

Despite the impressive progress that machine learning made in 2016, AI systems remain specialists: they cannot add new skills to their repertoires without erasing what they already know. This is the problem of catastrophic forgetting. An AI trained to recognise faces in photographs, for example, would not transfer to another visual task, such as recognising street signs. Each system would need to be trained for its own limited task.
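The effect is easy to reproduce even in a toy model. The sketch below is purely illustrative (it is not any system discussed here): a one-weight regressor is trained on one task, then on a conflicting task, and its performance on the first task is measured before and after.

```python
import numpy as np

def sgd(w, xs, ys, lr=0.1, steps=200):
    # plain gradient descent on mean squared error for the model y_hat = w * x
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

xs = np.linspace(-1, 1, 50)
task_a = xs       # task A: y = x
task_b = -xs      # task B: y = -x (conflicts with task A)

loss = lambda w, ys: float(np.mean((w * xs - ys) ** 2))

w = 0.0
w = sgd(w, xs, task_a)
loss_a_before = loss(w, task_a)   # near zero after training on A

w = sgd(w, xs, task_b)            # now train the SAME weight on task B
loss_a_after = loss(w, task_a)    # performance on A collapses
```

Because both tasks share the same parameter, optimising for the second task overwrites the solution to the first; this is the dynamic that work on catastrophic forgetting tries to mitigate.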

Older work on, e.g., complementary learning systems and dual neural networks sought to remedy this problem, but only worked at small scale. More recently, researchers have tackled catastrophic forgetting in deep networks and, with the benefit of new architectures and new techniques, have made significant early gains. This work will continue into 2017 and lay the groundwork for AI systems that learn “online” and incrementally, improving over their lifetimes, adding new abilities, and learning to compose existing skills into more complex operations.

What unexamined aspects of language understanding will come to the forefront in 2017?

One of the old debates of AI is connectionism versus symbolism (distributed, fuzzy statistical representations versus unified symbols that interact through hard rules). With the rise of deep learning, the connectionist paradigm has taken over (and rightfully so: it works). However, I believe symbolism still holds utility for various higher-level tasks. In 2017 I’d like to see work on marrying the two approaches, for instance in developing deep networks that learn to use the hard and fast rules of logic for reasoning and inference. Part of my interest in language is that it seems to exist at an intersection: words are symbols that we combine according to the hard rules of grammar, yet their use exhibits statistical nuance and flexibility, and distributed word representations have proved incredibly fruitful.


Harm van Seijen, Research Scientist

What major advances do you foresee in reinforcement learning in 2017?

Reinforcement learning, combined with deep learning, has proven very effective at solving single tasks, such as learning to play Atari games, but there is little cross-task learning. In 2017, major steps will be taken towards learning general skills that can be re-used across many different tasks. As people and companies move towards conversational agents, RL will play an increasingly important role in developing more capable agents that can handle a range of complex questions from users.
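For reference, the single-task setting typically rests on a value-learning rule such as tabular Q-learning. The sketch below runs the standard update on an invented toy corridor environment (not any benchmark mentioned here): the agent learns that moving right reaches the reward.

```python
import random

# Toy corridor MDP: states 0..4, start at 0, state 4 is terminal with reward 1.
# Actions move right (+1) or left (-1); moves are clipped to the corridor.
random.seed(0)
n_states = 5
actions = (1, -1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if s2 == 4 else max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# greedy policy after training: move right (+1) from every non-terminal state
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)}
```

The point of the cross-task research described above is that everything this agent learns is locked to this one corridor; the learned values say nothing about any other task.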

What do you think about the concerns on AI safety?

With more and more AI products being rolled out, I expect even more discussions around AI safety (and with that, even more Hollywood movies and TV series about AI running amok!). I think such discussions are good, although we should be cautious of fear mongering. The question of AI safety is ultimately not a technological one, but a political one. We as a society have to rethink how we want to organize ourselves in a world where machines can do most jobs as well as or better than humans.


Layla El Asri, Research Scientist

What major advances do you foresee in conversational interfaces in 2017?

Conversational interfaces started becoming widespread with personal assistants back in 2011, most notably with the launch of Siri. A further step was taken this year with the surge of text-based conversational interfaces (or chatbots) and the opening of platforms such as Skype and Facebook Messenger to developers. One of the bottlenecks of research on conversational agents has been the lack of data and the necessity to resort to simulations to train our models. The rise of new language understanding platforms will help collect data from real users and develop fundamental skills that can only be learnt in a real-world setting. As a consequence, I believe that in 2017 we will see memory, information-seeking and decision-making capabilities added to conversational agents. The launch of Frames, our goal-oriented dialogue dataset, will help support more complex conversational interactions.

Will we see new products or services powered by language understanding AI in 2017?

Computer vision has advanced considerably in the last few years. Recently, large datasets and efficient algorithms have been published that support research on grounding conversation in a physical environment. An immediate application, and a natural extension for customer service, would be to couple language understanding with computer vision for troubleshooting. More generally, I believe the next step for conversational agents will be the ability to gain information through vision, reducing the number of questions they need to ask the user and thus making them more efficient.


Philip Bachman, Senior Research Scientist

What are some promising directions for unsupervised learning?

Unsupervised learning is motivated as a way to help models "make sense" of the observed data (images, text, video...) without external labels or supervision. Previous work focused largely on the final outputs of such models; how the inner workings of a model cooperate to produce an output has been somewhat overlooked. Recent developments let us make the internal behaviour of a model interpretable. For example, we can now encourage certain features to represent the type of an object in an image and others to represent its location, or encourage certain features to represent the sentiment of a movie review and others to represent its genre. These techniques move us closer to the goal of learning to represent and infer higher-level concepts which explain what a model sees. Enabling this ability for a wider range of models and tasks will significantly increase the practical value of unsupervised learning.

What are some core challenges in unsupervised learning for language understanding?

It would be great to have models that extract compact representations which reveal the semantic content of natural language text. One major difficulty when working with natural language is the variety of ways in which an idea can be expressed. Often, what we want to extract from some text is a representation of the intent which drove someone to produce that text and the ideas which the text is meant to convey. In many cases, the particular words and syntax used to express an idea contain a lot of information aside from that which we seek. Separating subtle information about intent and ideas from the superficial form in which they are rendered is a major challenge for current models. Unsupervised learning techniques which overcome this signal-to-noise problem by discovering factored representations of content and form would be a significant advance.


Alessandro Sordoni, Research Scientist

What do you think will be a major advancement in artificial intelligence in 2017?

A currently active field of research is the development of algorithms that support meta-learning. Meta-learning algorithms not only learn to perform well on specific tasks, but are designed to discover fundamental rules of learning itself, so that the same algorithm may use the inferred rules to solve other related, but previously unseen tasks. Of course, this is an open problem in artificial intelligence, and much research still needs to be done.

How would this be helpful for conversational agents?

For conversational agents to be truly effective, they should be equipped with information-seeking capabilities. Imagine an agent that learns to ask and seek out information about the user’s preferences: this strategy enables a more personalized conversational experience and, in turn, greater user engagement. In this sense, learning how to elicit information from a user by asking questions is a form of meta-learning: testing which questions are useful for one user gives hints about whether they’ll be useful for other users too. Of course, the agents should not constantly question the users, but should seek information parsimoniously.
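One very simplified way to picture learning which questions are worth asking is as a multi-armed bandit. Everything in the sketch below (the candidate questions and their "usefulness" probabilities) is invented for illustration; in a real agent the reward would come from user feedback.

```python
import random

# Hypothetical setup: the agent picks one clarifying question per turn and
# receives reward 1 when the question turns out to be useful.
random.seed(1)
questions = ["cuisine?", "budget?", "location?"]
usefulness = [0.2, 0.5, 0.8]   # assumed probabilities, unknown to the agent
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]       # running estimate of each question's usefulness

for t in range(2000):
    # epsilon-greedy: occasionally explore, otherwise ask the question
    # that has been most useful so far
    if random.random() < 0.1:
        i = random.randrange(3)
    else:
        i = values.index(max(values))
    reward = 1.0 if random.random() < usefulness[i] else 0.0
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # incremental mean

best = questions[values.index(max(values))]
```

After enough interactions the agent's estimates concentrate on the most informative question, which mirrors the idea above: experience with some users informs which questions to ask the next ones.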

Research | Paul Gray