IBM Watson

Building a Cognitive Business

When IBM’s Watson burst upon the scene in 2011, little did we know that it would kick off a new category of computing. Since then, IBM has drawn most of its major divisions into the cognitive fold. That’s no surprise: cognitive computing is the ultimate Venn diagram, drawing on hundreds of technologies, from AI to ZooKeeper, in order to create systems that “interact, understand, reason, and learn.” It was apparent at the Watson Analyst Day on May 23rd that IBM’s message has been refined and has begun to gel. Just as we in the Cognitive Computing Consortium have moved beyond a vague sense that we had something fundamentally new, so too has IBM’s understanding of what cognitive computing is, and what it is good for, become much more solid.

Realizing that the complexity of cognitive solutions can be a barrier to entry, IBM Watson has begun to offer “App Starter Kits” around clusters of pre-integrated technologies, such as conversation agents, business intelligence, or audio analysis. But markets require more than a single vendor, and we have already seen the rise of new vendors that are not part of the Watson Partner constellation. Being able to mix and match platforms, apps, and technologies will require new standards not just for formats but also for storage and terminology if all types of data are to be exchanged easily. Making Watson’s cloud-based cognitive services, such as sentiment extraction, NLP, predictive analytics, and speech-to-text, available on both Bluemix and Twilio is a good step in this direction. So are the emerging sets of tools to guide adopters through data selection and modeling, analytics selection, visualization choices, and interaction design.
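To make the cloud-services idea concrete, here is a minimal sketch of what calling such a hosted sentiment service might look like from Python. The endpoint URL, auth scheme, and response shape are placeholders for illustration, not IBM's actual API.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only; the real
# Bluemix service URLs, paths, and payload formats are defined by IBM.
SERVICE_URL = "https://gateway.example.bluemix.net/sentiment/api/v1/analyze"
API_KEY = "your-api-key"

def analyze_sentiment(text):
    """Send text to a cloud-hosted sentiment service and return its JSON result."""
    response = requests.post(
        SERVICE_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # assumed shape: {"label": "positive", "score": 0.87}

if __name__ == "__main__":
    print(analyze_sentiment("The new starter kits make integration far easier."))
```

The appeal of this model is exactly what the starter kits promise: the heavy lifting happens server-side, and the adopter integrates a few HTTP calls rather than an AI stack.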

Two years ago, IBM launched its Watson Division. It now has 550 partners in 45 countries, thousands of developers, and programs in conjunction with 240 universities. It continues to add new languages and services. This is the beginning of a market, but we believe that this phenomenon is bigger than a single technology market. Rather, IT will evolve from the current deterministic computing era to one that is more nuanced. We already see elements of cognitive computing creeping into new versions of older applications: more intelligent interactions and better, more contextual recommendations. In this new world, we will add probabilistic approaches, AI, predictive analytics, learning systems, and the like, but we will also retain what works from the old. That calls for a much deeper understanding of which technologies solve which problems most effectively. What kinds of problems demand a cognitive computing approach? The processes that IBM delineated as possible elements of a cognitive solution are:

  1. Converse/interact
  2. Explore
  3. Analyze
  4. Personalize
  5. Diagnose/recommend

They also emphasized the importance of data: curated, annotated data that is normalized in some way, using ontologies for both categorization and reasoning. This should come as no surprise to those of us from the online industry, who know that there is no substitute for the blood, sweat, and tears that go into building a credible, usable collection of information. The question today is how to do this at scale, and at least semi-automatically, using NLP, categorizers, clustering engines, learning systems, training sets, and whatever other tools we can throw at this barrier to sense making.
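As one small illustration of the semi-automatic end of that spectrum, the sketch below clusters a handful of documents with TF-IDF and k-means (scikit-learn) so that a human curator can review, label, and refine the proposed groupings. It stands in for the general pattern, not for any particular vendor's pipeline; the sample documents are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Patient presented with chest pain and shortness of breath.",
    "Quarterly revenue rose on strong retail banking performance.",
    "MRI findings suggest a small lesion in the left temporal lobe.",
    "The merger was approved by shareholders of both companies.",
]

# Vectorize the text, then let clustering propose candidate categories;
# the machine does the first pass, the curator supplies the judgment.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for doc, label in zip(docs, labels):
    print(label, doc[:60])
```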

By far the biggest advances in cognitive applications have been made in healthcare, and with good reason. Medicine has a long history of information curation. Advances in ontology building, controlled vocabularies (normalization), and categorization date back to the 1950s. PubMed and its predecessors had already built multilingual online collections of medical publications, clinical data, toxicology, and treatment guidelines as early as the 1980s. These resources predate IBM Watson Health and have enabled it to address health information problems with an existing, well-curated knowledge base. Healthcare requires extreme accuracy, big data analytics, advanced and natural patient-doctor-machine interaction, and a probabilistic approach to solving medical problems. Because the amount of possibly relevant information is staggering, and the outcome is a matter of life and death, the reasons for investment in cognitive systems are obvious for healthcare insurers and providers alike. There are also, of course, billions of healthcare dollars at stake. Customer engagement, retail sales, mergers and acquisitions, investment banking, and security and intelligence are not far behind in their promise, but they lack that initial bootstrapping of existing knowledge bases.

In summary, cognitive computing is moving from dream to reality. New tools and more packaged applications have reduced the complexity and the time to deploy. Early adopters are still at the experimentation stage, but across IBM and other vendors and services firms we see gradual adoption with associated ROI, a virtuous loop that attracts yet more buying interest.

IBM’s Watson Expands its Toolbox: Acquires AlchemyAPI

With its acquisition last week of AlchemyAPI, IBM’s Watson Group added new tools and expertise to its already rich and growing array. AlchemyAPI’s technology complements and expands the core IBM Watson features. It collects and organizes information with little preparation, making it a quick on-ramp for building a collection of information that is sorted and searchable. It works across subject domains and doesn’t require the domain expertise that the original Watson required. Its unsupervised deep learning architecture is designed to extract order from large collections of information, including text and images, across domains.
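That "quick on-ramp" quality came through in the service's interface. The sketch below shows the general shape of an AlchemyAPI-style entity-extraction call; the endpoint path and parameters follow the then-public documentation as best recalled, so treat them as a historical approximation rather than a working reference, since the standalone service was later folded into Watson.

```python
import requests

# Historical sketch: AlchemyAPI exposed simple keyed REST endpoints for
# text analysis. Path and parameters below are approximations of the
# then-public API, shown only to convey how little setup was required.
ENDPOINT = "http://access.alchemyapi.com/calls/text/TextGetRankedNamedEntities"

def extract_entities(text, api_key):
    """Return ranked named entities found in a block of raw text."""
    response = requests.post(
        ENDPOINT,
        data={"apikey": api_key, "text": text, "outputMode": "json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("entities", [])

for entity in extract_entities("IBM acquired AlchemyAPI to expand Watson.", "KEY"):
    print(entity.get("type"), entity.get("text"), entity.get("relevance"))
```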

In contrast, the original Watson tools used to understand, organize, and analyze information demand some subject expertise. For best results, experts are required to build ontologies and rules for extracting facts, relationships, and entities from text. The result is a mind-boggling capability to hypothesize, answer questions, and find relationships, but it takes time to build and is specific to a particular domain. That is both good and bad: expert-built models provide a depth of understanding, but at a significant cost in time to get up and running. The Watson tools are also text-centered, although significant strides have been made to add structured information as well as images and other forms of rich media.

AlchemyAPI was designed to solve precisely these problems. It creates a graph of entities and the relationships among them, with no prior expectations for how the graph will be structured; the structure depends entirely on what information is in the collection. Again, this is both good and bad. Without subject expertise, topics that are not strongly represented in the collection may be missing or get short shrift. Both approaches have their limits as well as their advantages. Experts add a level of topic understanding, of expectations, of what might be required to round out a topic. Machines don’t. But machines often uncover relationships, causes and effects, or correlations that humans might not expect. Finding surprises is one of the strongest arguments for investing in big data and cognitive computing.
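A toy sketch of the underlying idea: build an entity co-occurrence graph from whatever entities happen to appear together, with no schema decided in advance. The entity sets here are invented for illustration; in practice they would come from an extraction service.

```python
from collections import defaultdict
from itertools import combinations

# Entities already extracted from each document. No structure is imposed
# up front; the graph simply emerges from what co-occurs in the collection.
doc_entities = [
    {"IBM", "Watson", "AlchemyAPI"},
    {"IBM", "Bluemix"},
    {"Watson", "Jeopardy!"},
]

edges = defaultdict(int)
for entities in doc_entities:
    for a, b in combinations(sorted(entities), 2):
        edges[(a, b)] += 1  # edge weight = number of co-occurrences

for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight}")
```

Notice that nothing in the code knows what "IBM" or "Watson" means; strongly represented topics grow dense neighborhoods, and thinly represented ones barely register, which is exactly the trade-off described above.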

In this acquisition, Watson continues the path that helped it win Jeopardy!—by combining every possible tool and approach that might increase understanding. IBM can now incorporate multiple categorizers, multiple schemas, multiple sources, and multiple views and then compare the results by the strength of their evidence. This gives us more varied and rich results since each technology contributes something new and crucial. Like the best human analysts, the system collects evidence, sorts through it, weighs it, and comes to more nuanced conclusions.
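One simplified way to picture that evidence-weighted combination is below; Watson's actual scoring machinery is far more elaborate, and the categorizers and confidences here are hypothetical.

```python
from collections import defaultdict

def combine(classifier_outputs):
    """Merge (label, confidence) votes from several categorizers and rank
    candidate labels by their total weight of evidence."""
    evidence = defaultdict(float)
    for votes in classifier_outputs:
        for label, confidence in votes:
            evidence[label] += confidence
    return sorted(evidence.items(), key=lambda kv: -kv[1])

# Three hypothetical categorizers scoring the same document differently.
outputs = [
    [("oncology", 0.8), ("cardiology", 0.1)],
    [("oncology", 0.6), ("neurology", 0.3)],
    [("cardiology", 0.5), ("oncology", 0.4)],
]
print(combine(outputs))  # oncology ranks first on combined evidence
```

The point of the exercise is that no single categorizer has to be right; the ranking comes from the accumulation of evidence across independent views.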

The Watson platform adds a major piece to information systems that is often unsung: it orchestrates the contributions of the technologies so that they support, balance, and inform each other. It feeds answers, errors, and user interactions back into the system so that Watson learns and evolves, as a human would. In this, it removes some of the maddening stodginess of traditional search systems that give us the same answers no matter what we have learned. In seeking answers to complex, human problems, we need to find the right answers, perhaps some wrong answers to sharpen our understanding, and certainly the surprises that lurk within large collections. We want a system that evolves and learns, not one that rests on the laurels of a static, often outdated ontology.
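A toy illustration of that feedback loop, and nothing more than that (it is not IBM's mechanism): user feedback nudges trust weights that then shape future rankings, so the system's answers drift with experience rather than staying frozen.

```python
def update_weight(weights, source, reward, learning_rate=0.1):
    """Nudge a source's trust weight up or down based on user feedback,
    so repeated interactions gradually reshape future rankings."""
    weights[source] = weights.get(source, 1.0) + learning_rate * reward
    return weights

# A thumbs-up (+1) on an answer drawn from the "clinical-trials" source
# raises that source's influence; a thumbs-down (-1) lowers it.
weights = {"clinical-trials": 1.0, "news": 1.0}
weights = update_weight(weights, "clinical-trials", +1)
weights = update_weight(weights, "news", -1)
print(weights)
```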

Mirroring this technology architecture, IBM’s Watson Group similarly requires a group of closely knit, strong-minded people who are experts in their separate areas of language understanding, system architecture, voting algorithms, user interaction, probability, logic, game theory, and so on. Alchemy contributes its staff of deep learning experts, who are expected to join the Watson Group. It also brings its 40,000 developers worldwide, who will broaden the reach and speed the adoption of cognitive computing.

The Watson Developer Challenge: Why mobile applications must be smarter

By their very nature, good mobile applications must be smarter. The physical limitations (the small screen, input mechanisms limited to one or two fingers or unpredictable voice recognition) mandate that a mobile app anticipate what you want to do and make it easy to get there. No drop-down boxes, not much scrolling, very little clicking to get to a new screen, and no chain of multiple queries when the first one is off the mark. Forget cut and paste. For these reasons, mobile applications must be both smarter at understanding what you want and intelligently designed. That’s hard.

Enter cognitive computing. If an application can really understand what the user intends, if it can classify questions and predict the kind of answer or action needed, then there will be less burden placed on the user to adapt to the limitations of the app. But cognitive computing requires real language understanding (NLP) as well as machine learning and classification. It also requires a corpus of examples to learn from. This is a level of technical prowess that would be impossible for most startups to develop. Enter IBM’s Watson. Watson Foundations was released last quarter, and now IBM has announced the Watson Mobile Developer Challenge. This contest invites app developers to submit a proposal to develop an application on the IBM Watson platform. Developers must make a case for what they propose, demonstrating why it would be valuable. Winning apps will capitalize on Watson’s strengths, illustrated in the sketch that follows this list:

  • Have a question and answer interaction pattern, with questions posed in natural language
  • Draw on mostly unstructured (text) information for answers
  • Return answers that are ranked according to their pertinence to the question
  • Benefit by better understanding (analysis) of the type of question being submitted
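
To make that interaction pattern concrete, here is a hedged sketch of a question-and-answer exchange returning confidence-ranked answers. The endpoint, payload, and response schema are assumptions standing in for the real Watson developer API.

```python
import requests

# Illustrative only: the URL, request body, and response fields below are
# placeholders for the actual Watson Q&A interface.
QA_URL = "https://watson.example.com/v1/question"

def ask(question, top_n=3):
    """Pose a natural-language question; return answers ranked by confidence."""
    response = requests.post(QA_URL, json={"questionText": question}, timeout=30)
    response.raise_for_status()
    answers = response.json()["answers"]  # assumed: [{"text", "confidence"}, ...]
    return sorted(answers, key=lambda a: a["confidence"], reverse=True)[:top_n]

for answer in ask("What are common side effects of statins?"):
    print(f'{answer["confidence"]:.2f}  {answer["text"]}')
```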

The catch is that applications are due by March 31st.

This contest brings cognitive computing within the reach of developers. Watson supplies the NLP tools, question analysis, machine learning, and confidence scoring that would otherwise be beyond the reach of most vendors. For more information, see IBMWatson.com. The application and rules can be found at:

http://www.ibm.com/smarterplanet/us/en/ibmwatson/form_challenge.html?cmp=usbrb&cm=s&csr=watson.site_20140226&cr=dev&ct=usbrb301&cn=sec5cta