The term “artificial intelligence” was beaten to semantic death in 2016. It has been used and abused before, but perhaps never like it was during a year of self-driving cars and home assistants, when anyone and everyone tried to attach “AI” to their startup as if it were the 1991 marketing blitz for Terminator 2.
From that perspective, one might be quick to forgive anyone who dismissed Microsoft’s latest acquisition: Maluuba.
Based in Montreal, the company was founded in 2010 by University of Waterloo graduates Sam Pasupalak and Kaheer Suleman. Maluuba currently focuses on applying deep learning and “reinforcement learning” to language comprehension by machines. To that end, they have published several research papers, though they haven’t publicly released a product.
But more importantly, they call themselves an “artificial general intelligence” company.
That sounds like the most boring kind of AI imaginable when you compare it to hot terms like “neural machine learning” or “generative adversarial networks.”
But to call yourself a “general AI” startup means you cover the gamut.
“So far, our team has focused on the areas of machine reading comprehension, dialogue understanding, and general (human) intelligence capabilities such as memory, common-sense reasoning, and information seeking behavior,” Maluuba’s announcement read. In other words, they are trying to give machines a kind of instinct, a programmed inclination to seek information.
“Ever since we were classmates in our AI course (CS 486) at the University of Waterloo, way back in the summer of 2010, our vision has been to solve artificial general intelligence by creating literate machines that could think, reason and communicate like humans.”
He went on to illustrate, asking readers to imagine a secretary with an intuitive knowledge of your company’s inner workings and deals.
“The agent would be able to answer your question in a company security-compliant manner by having a deeper understanding of the contents of your organization’s documents and emails, instead of simply retrieving a document by keyword matching, which happens today.”
What is Artificial General Intelligence (AGI)?
But being a “literate machine” means a lot more than just quickly evaluating a target text or image and making some sense of it.
The non-profit Artificial General Intelligence Society calls it an emerging field “aiming at the building of ‘thinking machines’; that is, general-purpose systems with intelligence comparable to that of the human mind,” and perhaps beyond it.
The society explains that the term “AGI” has become necessary as “AI” has become associated with narrower goals, with businesses and startups capitalizing on individual industries: search, data analysis, scheduling, even comparing contracts to regulations to ensure compliance in finance and insurance.
The broad idea of AGI is to flawlessly duplicate human intelligence, though no precise standard exists; thus far, the goal of an AGI-capable technology has only been vaguely defined.
Machine Intelligence Research Institute (MIRI) Executive Director Luke Muehlhauser lists a number of requirements different people have proposed for such a technology: hold a 30-minute conversation, interpret audiovisual information, figure out how to make a cup of coffee, enroll in and graduate from a university, and at least show the potential to fully automate important jobs.
Can you program an instinct to explore?
There are other teams that consider themselves AGI companies. Finland’s The Curious AI Company has raised nearly $1 million in seed funding for its work on unsupervised artificial intelligence. They like to think of their target technology as something suited to a carbon-based life-form but “adapted to silicon.”
Looking at it that way, you could say they want a human built out of machine parts, but that still misses one critical area. In a word, they are working on a trait so far unreachable for machines: curiosity.
AGI algorithms “must be intrinsically motivated to become progressively better at utilizing resources. This drive then naturally leads to effectiveness, efficiency, and curiosity,” Bas R. Steunebrink of the Swiss AI Lab IDSIA wrote in his dense but rewarding paper “Resource-Bounded Machines are Motivated to be Effective, Efficient, and Curious.”
He describes work with an AGI architecture called AERA which, in short, is missing the critical proclivity to be inquisitive.
“We built the AERA as a cognitive architecture towards AGI, based on many firm principles. However, curiosity was not one of them. Now it turns out that AERA is missing an exploration heuristic…”
Curiosity killed the cat, but it paves the way for the next cat to learn from the mistakes of the first. If machines could be made curious, they could be turned on, pointed in any direction and start extracting information from the ether and have data at the ready.
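One common way to operationalize the missing “exploration heuristic” is an intrinsic reward that shrinks as experiences become familiar, so the agent is pulled toward whatever it has seen least. The sketch below is purely illustrative and not AERA’s actual mechanism; the state names and bonus formula are invented for this article:

```python
import math
import random

def curiosity_bonus(visit_count, scale=1.0):
    """Intrinsic reward that decays as a state becomes familiar."""
    return scale / math.sqrt(visit_count + 1)

# Hypothetical agent: it earns reward only for novelty, so it is
# "motivated" to spread its attention across everything it hasn't seen.
visits = {s: 0 for s in ["lab", "office", "hallway"]}
for step in range(30):
    # Pick the state with the highest curiosity bonus (ties broken randomly).
    best = max(visits, key=lambda s: (curiosity_bonus(visits[s]), random.random()))
    visits[best] += 1
```

Because the bonus always favors the least-visited state, the agent ends up covering its environment evenly rather than fixating on one corner of it, which is exactly the behavior a purely extrinsic-reward system lacks.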
Assume a scanner is brought into a biology lab and made to record the sights (by camera), sounds (by microphone), number of researchers, time of day, and the like. With enough input, such a scanner could automatically accumulate data from its immediate surroundings and use unsupervised learning to extract insights from the murmuring of doctorate students in the lab.
The scenario I just described is entirely fictional, made up just for the purposes of this article. Yet, anyone working on neuro-dynamic programming is aware of the advantages an information-seeking computer would have versus one dependent on information input.
“One way to think of this aspect of experimentation in neuro-dynamic programming is as an analytic implementation of ‘curiosity.’ This inquisitive algorithm likes to test new actions with a component of randomness, see how the world responds, and adjust its concept of the world accordingly,” describes Scott Zoldi, VP of Analytic Science at FICO.
“It is actively collecting data we can think of as extra-informative. In other words, like humans, analytics can learn more from deliberate experimentation than from just passively observing the world and can do so in controlled and sensible ways.”
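Zoldi’s description, testing new actions with a component of randomness, observing how the world responds, and updating one’s model accordingly, maps closely onto epsilon-greedy exploration from reinforcement learning. Here is a minimal sketch under invented assumptions (the payoffs, noise level, and parameters are made up for illustration):

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Mostly exploit the best-known action, but with probability
    epsilon run a random experiment and update beliefs from the result."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    estimates = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # curious: try something at random
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)  # noisy world response
        counts[arm] += 1
        # Incremental mean: adjust the model of the world toward what happened.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

est, cnt = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The deliberate dose of randomness is what makes the data “extra-informative”: a purely greedy learner would lock onto the first decent action and never discover that a better one exists.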
Putting other “AI” startups in their place
Can Maluuba set the stage for Microsoft to introduce the first exploratory, curious AI to hit the big time? Time will tell, but Microsoft is betting big that it will.
Maluuba is content for now to focus on the linguistic elements of machine learning, a hot segment of the field.
As they say in their post, “Understanding human language is an extremely complex task and, ultimately, the holy grail in the field of AI. In early 2014, we observed great leaps in the fields of computer vision and speech recognition and pondered the potential of Deep Learning and Reinforcement Learning to enable our mission of creating literate machines.”
While speaking to the big buzz topics of 2017, such as deep learning, computer vision, and NLP, Maluuba really means it when they talk AI. Startup marketing is littered with AI and machine-learning buzz, but the products many companies offer are limited in scope. Maluuba is one of those companies working on a far more ambitious suite of algorithms, ones that would underpin something much closer to a science-fictional version of AI.
With that big picture in mind, they are likely planning their architecture to evolve with new advances, knowing they want their machines to be ‘self-motivated’ in the not-too-distant future.