Common sense in humans? Certainly not a guarantee. Case in point: the recent bank robber who scribbled his demand letter on a piece of paper that had his name and address on the other side.

Common sense in AI? It should be a given, but it’s been very challenging for AI to capture human-style common sense knowledge in a form that algorithms can interpret and apply.

Common sense is imperative for AI since it’s embedded in so many processes these days, many of them involving critical decision-making: driving autonomous cars, making medical diagnoses, and drawing other life-or-death conclusions from intelligence information. A lack of common sense can also thwart new developments in AI, and it has sometimes been the obstacle standing between narrow applications of AI and broader ones.

Recently, several key players have set their sights on common sense, including the U.S. military, Salesforce, and Microsoft co-founder Paul Allen. While the end goal is the same, their methods are different. Some are focused on building predictive models, similar to how children learn language. Others are dedicated to building vast reservoirs of knowledge. The military and Salesforce are going with the first approach.

The military has enlisted its research arm, DARPA, to work on this effort. DARPA’s Machine Common Sense program will run a competition that asks AI algorithms to make sense of certain questions. These benchmarks will focus on language because it can so easily trip up machines, and because it makes testing relatively straightforward.

Salesforce has created its own model based on the way children learn language, using context to predict the next word. Its data scientists started with human participants, asking them to explain which of several answers was “most appropriate.”
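The core idea of predicting the next word from context can be illustrated with a toy bigram model. This is a deliberately minimal sketch to show the principle, not Salesforce’s actual system, which uses far larger neural models:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows each preceding word across the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, context_word):
    """Return the most frequent next word seen after the context word."""
    candidates = model.get(context_word)
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "the dog chased the ball",
    "the dog ate the bone",
    "the cat chased the mouse",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → "dog" (follows "the" most often)
```

A model like this only sees one word of left-hand context; real systems condition on much longer histories, but the training signal is the same: the next word itself, requiring no hand-labeled data.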

Annotators highlighted the words in each question that justified an answer, then wrote brief, open-ended explanations based on those highlights to serve as the reasoning behind the answers.

The scientists used Google’s BERT, which is bidirectional (it draws on context from both before and after a word) and unsupervised (it can learn from text that is neither classified nor labeled).
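The difference bidirectionality makes can be shown with a toy fill-in-the-blank model that conditions on both the word to the left and the word to the right of a gap. Again, this is an illustrative sketch of the idea, not BERT itself:

```python
from collections import Counter, defaultdict

def train_cloze(corpus):
    """Count which word appears between each (left, right) context pair."""
    model = defaultdict(Counter)
    for sentence in corpus:
        w = sentence.lower().split()
        for i in range(1, len(w) - 1):
            model[(w[i - 1], w[i + 1])][w[i]] += 1
    return model

def fill_blank(model, left, right):
    """Predict the missing word given context on both sides of the gap."""
    candidates = model.get((left, right))
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "the dog chased the ball",
    "the dog chased the cat",
    "the cat saw the dog",
]
model = train_cloze(corpus)
print(fill_blank(model, "dog", "the"))  # → "chased"
```

A purely left-to-right model could not use the word after the gap at all; BERT’s masked-word training objective exploits both sides, which is one reason it transfers well to question-answering tasks like the ones described above.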

There is also an effort underway to use larger language models, which are expected to boost accuracy even further. The Allen Institute for Artificial Intelligence, or AI2, created by Paul Allen, is taking the alternative approach: building a vast reservoir of human knowledge that it expects will someday contain common sense.

It has $125 million to spend on this effort. Its researchers are expected to use a combination of crowdsourcing, machine learning and machine vision to create this reservoir of knowledge.
