Watson on Jeopardy! And IBM's Decision to Put Him There

Of course, one of the most prominent examples of cognitive computing is Watson’s famous victory on Jeopardy! Below is a video produced by Engadget that documents Watson’s appearance with Ken Jennings, Alex Trebek, and others. Below that is an excerpt from Smart Machines: IBM’s Watson and the Era of Cognitive Computing, by John E. Kelly and Steve Hamm. In the excerpt, Kelly and Hamm reveal the fascinating story behind the creation of Watson and the decision to put it on Jeopardy!

Excerpt from Smart Machines:

The Watson project got its start in a surprising way. In the fall of 2004, IBM’s head of computing systems software, Charles Lickel, traveled from his home in Tucson to spend the day with a small team he managed at an IBM facility in Poughkeepsie, New York. At the end of the workday, the team gathered at the nearby Sapore Steakhouse for dinner. They were bemused when, at seven p.m. sharp, many of the diners abruptly got up from their tables, rushed into the bar, and clustered excitedly around the TVs. One of Charles’s guys explained that they were watching long-time champion Ken Jennings defend his title on Jeopardy!

Charles hadn’t followed Jeopardy! for years, but the scene made an impression on him. A few months later, research director Paul Horn asked his lieutenants to think up a high-profile project that the lab could take on that would demonstrate IBM’s scientific and technological prowess. The company calls these its “grand challenges.” The previous grand challenge had been a huge success: IBM’s Deep Blue computer had beaten the world’s top chess grand master in a highly publicized match in the mid-1990s. But a lot of time had passed since that victory.

During one of the brainstorming sessions aimed at picking the company’s next grand challenge, Charles suggested building a computer that could compete on Jeopardy! IBM has long used man-versus-machine games to motivate scientists, focus research, and engage the public. In the early 1960s, IBM researcher Arthur Samuel, the AI pioneer, created one of the first computer programs capable of learning when he wrote a checkers-playing program designed to run on the 701, IBM’s first commercial computer. Samuel challenged one of the top U.S. checkers champions to a match—and won. In the late 1980s, IBM researcher Gerry Tesauro developed a program called TD-Gammon, which used a technique called temporal difference learning to teach itself how to play backgammon. It was competitive in matches with some of the world’s top backgammon players.

Later, IBM turned the contests into true spectator sports. The 1997 match between Deep Blue and the reigning world chess champion, Garry Kasparov, was streamed live on a website, and millions of chess enthusiasts watched every move. The Deep Blue program didn’t require much learning since chess is such a highly structured game, but, according to Murray Campbell, one of the researchers on the Deep Blue team, the match demonstrated that a combination of clever engineering and sophisticated algorithms could rival the performance of a top human expert in a specific domain of human achievement. “It made everybody understand in a clear way that problems previously considered too hard for a computer could now be tackled,” Murray says.

Of course, while IBM has played an important role in the evolution of artificial intelligence and learning systems, many other scientists and organizations have made major contributions. John McCarthy, a professor at Stanford University, coined the term “artificial intelligence” in 1955, and Marvin Minsky, a professor at MIT, has produced a long series of advances in the field since the 1950s. He’s now focusing on giving machines the ability to perform humanlike commonsense reasoning. Today, Andrew Ng of Stanford is leading a team of scientists in an attempt to create algorithms that can learn based on principles that the brain might also employ.

The field of artificial intelligence has advanced in starts and stops. Periods of soaring optimism have been followed by so-called AI winters, when seemingly promising avenues of research failed to produce the anticipated results. Put simply, this is hard stuff. So it’s no surprise that when Charles Lickel proposed Jeopardy! as the next grand challenge, his suggestion was met initially with reactions ranging from skepticism to outright derision. But he quickly won Paul Horn’s support. Paul thought the project could be very exciting—both to computer scientists and the public at large. In mid-2006 Charles gave the go-ahead to researcher David Ferrucci, who was an enthusiastic evangelist for the project, to explore whether building a machine that could win on Jeopardy! was even plausible.

Progress was extremely slow at first. For several years, Dave had been leading a group of researchers who had produced good results with question-answering technologies. However, their performance gains had plateaued. So Dave and a small team cast around for the right technology and strategy with which to approach the Jeopardy! challenge. By mid-2007, though, Dave was convinced the job could be done, and Charles and Paul gave the project their blessing. However, there was a danger that the project would never actually be realized, as Paul retired from IBM soon after and became senior vice provost for research at New York University. His replacement as head of IBM Research was one of the authors of this book, John Kelly. Dave was terrified that John would kill the project, and his fears were not assuaged when, at their first meeting, John expressed deep reservations. At the time, the system, then called Blue J, was only able to answer 30 percent of previous Jeopardy! questions accurately. John told them: “Guys, we can’t put the IBM brand on TV with 30 percent accuracy.”

Some of the team members were discouraged. But Dave told them that he was convinced they could achieve their goals. He said he was so committed to the project that he was willing to risk failure and public humiliation. And he asked each of the twelve scientists who were then on the team to fully commit themselves, as well. “Are you ready for this?” he asked.

They were. Over the following weeks and months they gradually improved the technology to the point where, even though the gap was still huge, John was convinced they could build a machine that could compete at the highest levels on Jeopardy! He saw that the effort could boost IBM’s reputation as an innovator and that the science would be transformative, opening up massive new opportunities for computing. He said: “I’ll give you whatever you need in the way of resources, but, if we’re going to put the IBM brand on national TV, we must win. We must win.” That launched the intense effort that led ultimately to Watson’s victory.

On January 14, 2011, the Jeopardy! contest was conducted on stage in the IBM Research auditorium. Top IBM executives, research scientists, and guests packed the room. Emotions ran high. John told the audience that he did not know if Watson would win, but he believed that the contest represented an important moment in the history of computing. He said, “It’s not a matter of if a computer will one day win at Jeopardy!, but when.”

John was extremely nervous as the games began. He prayed that the system wouldn’t crash and that Watson wouldn’t make any embarrassing mistakes. During the final game, when it became clear to him that Watson would emerge victorious, he glanced across the room and caught Dave Ferrucci’s eye. Both smiled. They knew it was done. They had bet big and won.

Watson was not designed just for playing Jeopardy! From the start, the goal was to create a technology platform that could be adapted to a wide variety of uses—a practical tool with the potential to transform business and society. Rather than trying to create a monolithic set of rules for analyzing data, the development team used many simpler algorithms that could be added to or mixed and matched depending on the task before Watson. They created an analytics program for weighing and evaluating evidence and conclusions, no matter what the domain. And they made it possible for experts in particular domains to contribute knowledge to the program. The result: “We created an architecture of discovery,” says Eric Brown, who heads IBM Research’s Watson team.
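The "mix and match" design Kelly and Hamm describe—many small, independent algorithms whose evidence is weighed together rather than one monolithic rule set—can be sketched in a few lines. This is purely an illustrative toy, not Watson's actual architecture or API; every name below is invented for the example.

```python
# Illustrative sketch only: a pipeline of pluggable evidence scorers,
# loosely inspired by the architecture described in the excerpt.
# None of these names correspond to real Watson components.
from typing import Callable, List

# A scorer maps (question, candidate answer) to a confidence in [0, 1].
Scorer = Callable[[str, str], float]

class EvidencePipeline:
    def __init__(self) -> None:
        self.scorers: List[Scorer] = []
        self.weights: List[float] = []

    def add_scorer(self, scorer: Scorer, weight: float = 1.0) -> None:
        """Plug in another algorithm; domain experts could add their own."""
        self.scorers.append(scorer)
        self.weights.append(weight)

    def confidence(self, question: str, candidate: str) -> float:
        """Weighted average of every scorer's opinion on one candidate."""
        total = sum(self.weights)
        return sum(w * s(question, candidate)
                   for s, w in zip(self.scorers, self.weights)) / total

    def best_answer(self, question: str, candidates: List[str]) -> str:
        """Pick the candidate the combined evidence supports most."""
        return max(candidates, key=lambda c: self.confidence(question, c))

# Two toy scorers standing in for real evidence-analysis algorithms.
def keyword_overlap(question: str, candidate: str) -> float:
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / max(len(c), 1)

def length_prior(question: str, candidate: str) -> float:
    # Short answers are more Jeopardy!-like; mildly penalize long ones.
    return 1.0 if len(candidate.split()) <= 3 else 0.5
```

The point of the structure is that adding a new domain means registering new scorers, not rewriting the pipeline—one reading of what "an architecture of discovery" might look like in miniature.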