Teaching Computers to Think Like Humans
January 17, 2014
by Suzanne Bouffard
Computers are making our lives more efficient and productive than ever, but we still want them to be faster, smarter, more powerful. The key, says cognitive psychologist Michael Jones, is right under our noses, or actually above them: teach computers to think more like the human brain.
“The brain has been optimized by evolution to handle massive information processing problems,” explains Jones, an associate professor at Indiana University Bloomington. In contrast, he says, “even the world’s most sophisticated supercomputers still can’t solve some problems that my 3-year-old daughter naturally gets.” That’s because most computer systems are designed to solve a single task or set of tasks, and they aren’t very good at transferring knowledge to other problems.
Jones wants to change that. He specializes in cognitive modeling – learning how the human brain works – and in applying that knowledge to make computers smarter. Cognitive modeling differs from artificial intelligence (AI), which starts with technology and engineering rather than with human intelligence. AI has generated advances but also has limitations, according to Jones. He points to Watson, the computer designed to beat “Jeopardy!” champion Brad Rutter. Watson’s knowledge base consisted of 200 million pages of text and took up four terabytes of disk space, yet it couldn’t do any of the other things Rutter can do, like read a book, drive a car, or draw a picture. And Rutter held his own, Jones notes, even though Watson had far more memory and processing speed.
Jones is convinced that a lot of the information needed to make computers better “is actually present in the real world.” And that information is more accessible than ever before, thanks to the rise of Big Data – the massive amounts of information available today about health, education, industry, and other topics. Jones and his colleagues use that data to create computer models that explain phenomena the way that mathematical equations do. For example, Jones says, he can use large amounts of data about young children’s speech patterns to create a model that explains how language develops over time. He can test and refine the model until it gets very good at predicting how a child’s speech will mature. Then he can use the model to teach a computer how to comprehend language.
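The fit-test-predict loop Jones describes can be sketched in miniature. The snippet below is purely illustrative – the data points, the linear growth model, and the `predict` function are hypothetical stand-ins, not Jones’s actual models or data. It fits a simple line to made-up vocabulary counts and then uses the fitted line to forecast later development:

```python
# Hypothetical data: ages (in months) and vocabulary sizes for one child.
# These numbers are invented for illustration, not drawn from real research.
ages = [18, 24, 30, 36, 42]
vocab = [50, 280, 550, 900, 1300]

# Ordinary least-squares fit of vocab = a * age + b, computed by hand.
n = len(ages)
mean_age = sum(ages) / n
mean_vocab = sum(vocab) / n
a = sum((x - mean_age) * (y - mean_vocab) for x, y in zip(ages, vocab)) / \
    sum((x - mean_age) ** 2 for x in ages)
b = mean_vocab - a * mean_age

def predict(age_months):
    """Predict vocabulary size at a given age from the fitted model."""
    return a * age_months + b

# The model's guess for this child's vocabulary at age 4.
print(round(predict(48)))  # → 1552
```

In practice the model would be tested against held-out data and refined – here, by comparing `predict` against vocabulary counts the fit never saw – which is the refinement cycle the paragraph above describes.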
For Jones, understanding the “basic mechanisms humans use to learn, represent, and use information” is exciting enough in itself. But his work naturally lends itself to practical applications across a wide range of fields. His research has recently been funded by Google to study how cognitive models could make search engines better able to understand what a user really means when she types in a search phrase. Historically, computer programs have relied strictly on text, but people also use vision, memory, and other information to solve problems, so Jones is working to help computers and mobile devices incorporate sensory information to improve their accuracy.
He is also working on healthcare and educational applications. One is a customizable computer system for elementary school classrooms that will provide quick, automated feedback to individual students about how well they are mastering the teacher’s lesson. Another is an early detection system for Alzheimer’s disease that relies on data showing the kinds of memory tasks people struggle with long before they show clinical symptoms. These applications are just the beginning, Jones says. He believes that many sectors of society can benefit from cognitive modeling, because when it comes to solving complex problems, the human brain is the smartest machine there is.
Michael Jones was recently honored with the Federation of Associations in Behavioral & Brain Sciences (FABBS) Foundation Early Career Investigator Award during the 2013 meeting of the Society for Computers in Psychology.