IBM Watson

Using AI in an Uncertain World

Like anything else in life except death and taxes (and even the particulars of these are uncertain), uncertainty is something that humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we have all learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.

Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy, and that algorithms are crafted by humans who have certain biases about how systems and the world work—and their biases may not match yours. Furthermore, no data set is unbiased, and we use data sets with built-in biases, or with holes in the data, to train AI systems. These systems are, by their very nature, biased or lacking in information. If we depend on them to be perfect, we are letting ourselves in for errors, mistakes, and even disasters.

However, relying on biased systems is no different from asking a friend, who shares your world view, for information that may serve to bolster that view rather than balance it.  And we do this all the time.  Finding balanced, reliable, reputable information is hard and sometimes impossible.  Any person trying to navigate an uncertain world tries to make decisions based on balanced information. The import of the decision governs (or should) the effort we make in hunting for reliable but differing sources.  The speed with which a decision must be made often interferes with this effort. And we need to accept that our decisions will be imperfect, or even outright wrong, because no one can amass and interpret correctly everything there is to know.

Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision making. Adding the input of a well-crafted, well-tested system based on a large volume of reputable data can speed up human decision making and improve the outcome. There are good reasons for this: human thinking balances AI systems, and each can plug the other's blind spots. Humans make judgments based on their world view. They are capable of understanding priorities, ethics, values, justice, and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. But humans can decide whether to use that information. This makes a perfect partnership in which one of the partners won't be insulted if its input is ignored.

Adding AI into the physical world, in which snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example. In the abstract, and at a high level, it's been shown that most accidents and fatalities are due to human error. So self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose that we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver. On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to help the elderly or the disabled live their lives independently. Like Alexa or Siri on a smaller scale, this could change lives. We come back to the matter of context, use, and value. There is no single answer to human questions of "should."

This brings us to the question of trust. Should we trust AI systems and under what circumstances? That depends on:

  • The impact of wrong or misleading information: poor decisions? Physical harm? Momentary annoyance?
  • The amount and reliability of the data that feeds the system
  • The goals of the system designers: are they trying to convince you of something? Mislead you? Profit from your actions?
  • The quality of the question/query

Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements, and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Good design is the key—within the context of the use and the user.

Building a Cognitive Business

When IBM’s Watson burst upon the scene in 2011, little did we know that it would kick off a new category of computing. Since that time, IBM has drawn most of its major divisions into the cognitive fold. That’s no surprise: cognitive computing is the ultimate Venn diagram, drawing on hundreds of technologies, from AI to ZooKeeper, in order to create systems that “interact, understand, reason, and learn.” It was apparent at the Watson Analyst Day on May 23rd that IBM’s message has been refined and has begun to gel. Just as we in the Cognitive Computing Consortium have moved from a vague understanding that we had something fundamentally new, so too has IBM’s understanding of what cognitive computing is, and what it’s good for, become much more solid.

Realizing that the complexity of cognitive solutions can be a barrier to entry, IBM Watson has begun to offer “App Starter Kits” around clusters of pre-integrated technologies, such as conversation agents, business intelligence, or audio analysis. But markets require more than a single vendor, and we have already seen the rise of new vendors that are not part of the Watson Partner constellation. Being able to mix and match platforms, apps, and technologies will require new standards not just for formats but also for storage and terminology if all types of data are to be exchanged easily. Making Watson’s cloud-based cognitive services, like sentiment extraction, NLP, predictive analytics, or speech-to-text, available on both Bluemix and Twilio is a good step in this direction. So are the emerging sets of tools to guide adopters through data selection and modeling, analytics selection, visualization choices, and interaction design.
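To make the cloud-services idea concrete, here is a minimal sketch of calling a hosted sentiment-extraction service over REST from Python. The endpoint URL, credential handling, and response fields are hypothetical placeholders, not the actual Bluemix or Twilio APIs.

    import requests

    # Hypothetical endpoint and key -- placeholders, not the real Watson/Bluemix API.
    SERVICE_URL = "https://example-cognitive-cloud.com/v1/sentiment"
    API_KEY = "YOUR_API_KEY"

    def extract_sentiment(text):
        """Send text to a hosted sentiment service and return its JSON verdict."""
        response = requests.post(
            SERVICE_URL,
            headers={"Authorization": "Bearer " + API_KEY},
            json={"text": text},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # e.g. {"label": "positive", "score": 0.87}

    print(extract_sentiment("The starter kits cut our integration time in half."))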

Two years ago, IBM launched its Watson Division. It now has 550 partners in 45 countries, thousands of developers, and programs in conjunction with 240 universities. It continues to add new languages and services. This is the beginning of a market, but we believe that this phenomenon is bigger than a single technology market. Rather, IT will evolve from the current deterministic computing era to one that is more nuanced. We already see elements of cognitive computing creeping into new versions of older applications—more intelligent interactions and better, more contextual recommendations. In this new world, we will add probabilistic approaches, AI, predictive analytics, learning systems, etc., but we will also retain what works from the old. That calls for a much deeper understanding of which technologies solve which problems most effectively. What kinds of problems demand a cognitive computing approach? The processes that IBM delineated as possible elements of a cognitive solution are:

  1. Converse/interact
  2. Explore
  3. Analyze
  4. Personalize
  5. Diagnose/recommend

They also emphasized the importance of data—curated, annotated data that is normalized in some way, using ontologies for both categorization and reasoning. This should come as no surprise to those of us from the online industry, who know that there is no substitute for the blood, sweat, and tears that go into building a credible, usable collection of information. The question today is how to do this at scale, and at least semi-automatically, using NLP, categorizers, clustering engines, learning systems, training sets, and whatever other tools we can throw at this barrier to sense making.
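As a toy illustration of what "semi-automatic" curation can mean, the sketch below clusters a handful of uncategorized documents so that a human curator can review groups rather than individual items. It assumes Python with scikit-learn; the corpus and cluster count are invented for illustration.

    # Toy sketch: cluster uncategorized documents so a curator reviews groups,
    # not individual items. Corpus and cluster count are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "Patient responded well to the new treatment protocol.",
        "Quarterly revenue exceeded analyst expectations.",
        "Clinical trial results show reduced side effects.",
        "The merger will reshape the retail banking sector.",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for label, doc in sorted(zip(labels, docs)):
        print(label, doc)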

By far, the biggest advances in cognitive applications have been made in healthcare. With good reason. Medicine has a long history of information curation. Advances in ontology building, controlled vocabularies (normalization), and categorization date back to the 1950s. PubMed and its predecessors had already built multilingual online collections of medical publications, clinical data, toxicology, and treatment guidelines as early as the 1980s. These resources predate IBM Watson Health and have enabled it to address health information problems with an existing, well-curated knowledge base. Healthcare requires extreme accuracy, big data analytics, advanced patient-doctor-machine natural interaction, and a probabilistic approach to solving a medical problem. Because the amount of possibly relevant information is staggering, and the outcome is a matter of life and death, the reasons for investment in cognitive systems are obvious for healthcare insurers and providers alike. There are also, of course, billions of healthcare dollars at stake. Customer engagement, retail sales, mergers and acquisitions, investment banking, and security and intelligence are not far behind in their promise, but they lack that initial bootstrapping of existing knowledge bases.

In summary, cognitive computing is moving from dream to reality. New tools and more packaged applications have reduced the complexity and the time to deploy. Early adopters are still at the experimentation stage, but from IBM and other vendors and services firms, we see gradual adoption with associated ROI, a virtuous loop that attracts yet more buying interest.

Sparking Innovation: Dynamic Technologies for a Dynamic Process

[The following article is a transcript of a video of Sue Feldman’s keynote session at KMWorld 2015 in Washington, DC. View the full session video at http://www.kmworld.com/Articles/Editorial/ViewPoints/Cognitive-Computing-and-Knowledge-Management-Sparking-Innovation-108929.aspx?PageNum=2]

Download the slides: Sparking Innovation

Innovation is perhaps the biggest test of a knowledge management system. We’re used to capturing information. We’re used to locking it down. We’re used to accumulating it. We’re used to creating some kinds of access to it. Innovation makes us go far beyond that. What I’ll talk to you about today is what innovation is. What the process for coming up with a new idea actually is. Then I would like to talk to you about cognitive computing because I think that it solves some of the problems that our older, traditional technologies cannot really address adequately. I’ll end by talking a little bit about where I think knowledge management has to go.

Let me start by telling you a story. Once upon a time, there were a biologist and a physicist, and they went for a walk. The first thing they did was fall into a conversation about a fairly arcane subject: DNA. The physicist was interested in the electrical properties of DNA. The biologist knew a fair amount about that because he was also a chemist and something of an inventor. They talked and they walked, and then the physicist went back home, continued to ask questions, do research, etc. The biologist kept on sending information but really went back to what he liked to do best, which was tinkering with ideas and things, because he was something of an inventor. After several years of research, the physicist, Esther Conwell, won the National Medal of Science in 2010 for her work on the conductive properties of DNA and how to enhance them, because she was interested in semiconductors. The inventor was my Dad, and that's the kind of thing that he enjoyed. I think it's a story of what happens when you have two innovative, open-minded people.

Ingredients of Innovation

Let’s take a look at the ingredients. First of all, you need a problem or research direction. In this case it was semiconductors. You also need opportunity. You need cross-fertilization, in this case biology and physics, which are adjacent but certainly not congruent. You need colleagues who, like you, are interested in discussing ideas. In the research that I’ve done over the years on the process of innovation, I’ve found that innovative inventions and discoveries of various kinds tend to be sparked by good food and a bottle of wine. It’s almost a requirement. You need curiosity. You need that serendipitous encounter to create the “aha moment,” a happy accident. You also need information, and you need support, both in the sense of an organization willing to let you noodle around and in the sense of the information that is provided to you.

What is innovation? Well, it’s a lot of things. When President Obama presented the medal to Esther Conwell, he said that innovation is fueled by a combination of caffeine and passion. Obsession actually. Certainly, it requires a new idea, but it’s rarely entirely novel. It builds on what came before and that should be of importance to knowledge managers. Game-changing innovations occur at the boundaries between subjects and organizations. It’s a group effort rather than an individual one. Developers, users, partners, and colleagues all have a part in it because they provide not just the ideas but also the need that spurs the innovator to solve a problem. It tends to occur at the lower levels of organizations. Those of us who are at the top of the organization, beware. It may disrupt industries or companies for good or ill and it is both risky and rewarding. That’s innovation.

Supporting Innovation

What’s the business case for supporting innovation? Because very often it doesn’t pay off. Those of you who have R&D departments know that that’s the case. First, revenue. If you’re successful, it drives revenue because you are first to market. That means you are able to dominate that market and in fact that’s what’s happening with cognitive computing right now. You can attract and keep customers, build customer loyalty and market buzz, shape that market the way you want. It helps you avoid disruption and stay competitive. It helps you expand into new markets.

By creating a fertile environment for R&D, you also have a pipeline of new ideas to avoid stagnation and being bypassed by competitors. You attract outstanding researchers who soon burn out and leave if you don’t provide them with that kind of organizational support and latitude because innovation is a fragile flower. It gets trampled very easily.

On my second job, a very long time ago, I got hired by someone who called me in after two weeks and said, “I heard that you are innovative, Susan. You haven’t had any ideas yet.” She was right. I never had another one for her.

The Innovation Process

What is the innovation process? It’s quite different from what goes on in knowledge management normally. First, you have to have that idea or interest. There’s no question about that. You have open discussions, wide readings, you bump into people, you talk to them in the hallways, you go out to dinner with friends who are not in the organization, and gradually you discover that there is a need, which you find intriguing.

This is a very individual process even though it requires other people. You define the problem. You eliminate some of the common ideas. You discover that other people have been there before you and you give up and do something else. Then something interesting happens: you’ve taken in all this information, you’ve stuffed it into your head, and you let it simmer.

We had a graphic designer who was tremendously innovative. He used to go home and take a bath. I’d go for walks. Other people do other things. They knit. They cook. They garden. Whatever it is, they have to distract the front of their brain so that the back of the brain can allow that ferment to happen, and that’s great fun. But if you have too tight a deadline, you’re not going to follow that elusive idea which is half-formed, because you don’t have time for it. You have to meet the deadline, and the idea gets squashed. That’s a very important thing for organizations to understand. These people who are innovators need some direction, but they also need a great deal of latitude and freedom as well.

You have to explore broadly. (This is where cognitive computing and knowledge management coincide, as I’ll discuss later.) You have to filter and winnow and focus and rethink and iterate and go back to the beginning and start all over again. Finally, you have something concrete enough to develop and off you go, maybe. You find the problem, do research on it, then go off and develop. You commercialize it, you throw it into the marketplace, and you see what the consequences are–big revenue, losses, whatever it turns out to be.

You identify the problem by talking to customers, talking to colleagues, talking to sales people, and talking to other researchers. Coming back is very iterative, as most of you know. You do research and you redefine the problem.

Again, you iterate. Test it on the market, for example on social media. Do competitive intelligence. Then you might commercialize it and see what happens after that.

Discovering What We Don’t Know

There is a set of information tasks that we try to support with knowledge management, research, and text analytics. Any sort of information access and management tool aims to support all of these tasks, but the tools rarely do. The problem is that we have separate tools. The creation tools may not be well integrated into the process. Even if they are, the fact is that in innovation we’re on the phone, we’re sending emails, we’re discussing in the hallways. We’re not capturing that.

The reasons why we make decisions and change directions are poorly known and can’t be modeled so that the process can happen again. We’re losing information that’s falling off the table. We’re pretty good at finding, in some ways. We’re not so good at discovering what we don’t know and uncovering patterns we don’t know enough to look for. That discovery and uncovering are key to innovation, because what we want is to find out what we don’t know so that we can invent it. We’re pretty good at analyzing information, and getting better. The discussion is very often not integrated into this whole picture, and the decision-making is fairly diffuse. These are information tasks that we need to be able to support.

The Role of Information and Analysis Tools

The role of information access and analysis tools in this case is to improve exploration and discovery, to introduce related information. Although we want related information, we don’t want all the information in the world.

How do we manage to promote those happy accidents without burying the searcher? We have to help with the information-finding process, perhaps eliminating queries in favor of exploration of some sort. This is where our traditional systems also fall down: in helping the user frame the question broadly, and helping the user understand how to ask for information they don’t yet know they need. We used to have knowledgeable intermediaries who did a lot of this, but that’s not what’s happening today.

The tools have to help us understand and discover unexpected relationships across all sources of information. They need to search on a concept level rather than on keywords, because keywords are also a limitation. They need to unite multiple sources of information no matter what format they’re in or where they reside. They need to let us collect and share and discuss. They need to enable information and people to interact in one place. Then, of course, they need to save us time so that we can look at enough information to have those ideas. The tools that have started to emerge over the last couple of years are key to supporting these expanded roles for knowledge management. Cognitive systems are the next logical step.

As an analyst, I’ve been watching the markets develop all kinds of tools: business intelligence, search, text analytics, graphics of various kinds, reporting tools, creation tools, and drawing tools. They all solve a piece of a problem.

We used to call that “Sneakernet.” The Sneakernet that goes on in the creative and innovation process is overwhelming. It’s a tremendous waste of time because it means you’re constantly rummaging back through stuff that you did 10 years ago because you know you did it already. In fact, when I was preparing this talk about innovation, I had to go back to research I did 10 years ago because I knew I’d done something about this, but I really didn’t remember where it was. It was really hard to find it; desktop search is terrible. Yet, there it was in the back of my head.

Enter Cognitive Computing

See the separate article in KMWorld, “What is Cognitive Computing”:

http://www.kmworld.com/Articles/Editorial/ViewPoints/What-is-Cognitive-Computing-108931.aspx

Cognitive computing is going to bring us another step closer to solving some of these problems. What is cognitive computing? Last year I brought together a team of 14 or 15 people to try to define it before marketplace hype completely screwed up any idea of what it was. I don’t know if we’re succeeding or not.

What are the problems that cognitive computing attacks? They’re the ones that we have left on the table because we can’t put them into neat rows and columns. They’re ambiguous. They’re unpredictable. They’re very human. There’s a lot of conflicting data. There’s no right and wrong, just best, better, and not such a good idea but maybe. This data requires exploration not searching. You just have to keep poking at it and shifting things around.

When I’m at the beginning of a project, I find myself jotting down ideas and then arranging them on a large table because sometimes they fit together one way and sometimes they fit together another. You need to uncover patterns and surprises, and computers are very good at this because they don’t get embarrassed by wrong ideas. Although they all have the biases that they get from their programmers, their biases are different from yours.

The situation is shifting as well. As we learn more, we change our focus and our goals. We go back and ask the same questions we asked before, but hoping for different results, because we’ve already learned that stuff. That is not so easy in today’s systems. If you go to Google and ask the same question, you’re not always going to get the same answers, but you’ll get similar answers. But if you were looking for pictures of Java because you’re planning your vacation today, and two months from now you want flights, the system won’t know your progress and your decisions.

The Value of Context

How do we make a cognitive system into a partner, so that it keeps track of who we are and what we want to know at this time? It gives the best answers based on who you are, where you are, what you know, what you want to know, and when you want to know it. It is very individually focused. Its aim is problem solving beyond information gathering. It gives recommendations based on who you are. I want to give you a couple of examples, because context, we have found, is one of the key differentiators of a cognitive system.

In 2011, IBM designed a computer named “Watson” that won Jeopardy against two human champions. That was the beginning of cognitive computing. (You can see it on YouTube: https://www.youtube.com/watch?v=Puhs2LuO3Zc).

For me, as a person who has been in the chaotic world of search and text analytics all my life, it was a validation that the kinds of things that we do–the search index as opposed to the database–were actually really useful for very complex problem-solving. That was the beginning.

For another example, think about patient care. The emphasis on who, what, where, and when you are is one of the differentiators for cognitive computing. We all need slightly different slants on the same question. Let’s say we have a patient who has a disease. We know his genetic makeup, his age, his history of smoking, that he has certain allergies, etc. We also know where he is and what kind of access he has to medical care. We also have access to enormous amounts of information, especially in health care: possible treatments, with confidence scores. How does this change health care? Because this is life and death.

Today, in standard health care, if you have a disease, or a particular kind of tumor, there are treatment guidelines. It doesn’t matter if you’re black, white, female, male, young, or old. That’s how you treat it. That’s not the way it needs to work. Instead, imagine you have all that information–more information than any doctor can amass in his head–and you’ve ingested it, and you can start to match that person, as a query, against that information and all of the applicable drugs, side effects, and what’s known from clinical trials. You come up with two or three treatments. Maybe the system says, “Have you considered that if you did this test we would have more confidence in our recommendation?” It’s a dialogue now. It’s a dialogue that supports the doctor and the patient in their decision on a treatment, and that’s the kind of medical care I want. That’s another kind of context.
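To illustrate the “patient as a query” idea in the simplest possible terms, here is a sketch that rules out unsafe treatments and ranks the rest by a confidence score. The profiles, rules, and numbers are all invented for illustration; a real clinical system would be vastly more sophisticated.

    # Drastically simplified sketch: treat the patient profile as a query and
    # rank candidate treatments by confidence. All data here is invented.
    patient = {"age": 62, "smoker": True, "allergies": {"penicillin"}}

    treatments = [
        {"name": "Treatment A", "min_age": 18, "contraindicated": {"penicillin"}, "evidence": 0.9},
        {"name": "Treatment B", "min_age": 50, "contraindicated": set(), "evidence": 0.7},
        {"name": "Treatment C", "min_age": 18, "contraindicated": set(), "evidence": 0.6},
    ]

    def confidence(treatment):
        if patient["allergies"] & treatment["contraindicated"]:
            return 0.0                          # rule out unsafe options entirely
        score = treatment["evidence"]           # start from clinical-trial evidence
        if patient["age"] >= treatment["min_age"]:
            score += 0.1                        # a criteria match raises confidence
        return score

    for t in sorted(treatments, key=confidence, reverse=True):
        print(t["name"], round(confidence(t), 2))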

Suppose you’re an investor. In that case the context is the portfolio and the personality. Are you conservative? Are you a risk-taker? How old are you? Do you want a lot of data, or do you just want to be told what to invest in? Are you an influencer? What’s your previous investment history? What are the market trends? What is your investment strategy?

All of those things need to be taken into account. That’s what human investment advisors do, but they’re not all-knowing. Starting with the evidence, the information, and then the ability to make a better judgment instead of a gut, intuitive decision is a very good idea–especially if it’s your money.

Consider the company CustomerMatrix. They have a sales application that sits on top of Salesforce. They do a lot of this. They look at who the salesperson is. They look at who the manager is, and who the strategist is for the company. They give different answers to each. But the thing that I’m fascinated by is that they have also ingested your business goals and your business strategy. They will make recommendations about the usefulness of approaching one prospect or another, for acquisition, or sales, or another department, based on how a positive outcome will influence the business of the entire company, as opposed to just that salesperson’s commissions. It’s not a bad idea.

Another example is ExpertSystem. I’m just showing you that this can be very familiar. These systems do the usual text analytics things. They extract sentiment. One example is about a cat that was a popular resident of one of the train stations. But they also extract things like sadness, and they give people who are writing news, for instance, a very good idea of what readers are looking for by comparing the number of hits, the number of readers, the number of tweets, etc., against the kinds of extractions they’ve already done. This is a kind of building block, and the context is this particular event.

Elements of Innovation

What are the elements of innovation? People, collaboration tools, access tools, information of different types, and a work environment that is designed for cross-fertilization.

What kills innovation? Lack of organizational support, party-line thinking, no time to think, too-rigid innovation systems, lack of encouragement of innovation, poor or limited information and information access, and of course, information overload. We want lots of information but we don’t want too much. That’s a tall order for knowledge management.

Re-imagining KM

Can we re-imagine knowledge management? What can we do to give us a sort of informed serendipity? How do we do this? Cognitive computing can help with this, but there are some changes we need to make–not just in our systems and our tools, but also in our thinking. To bring us back to that DNA metaphor, we need to get away from structures in some cases. They’re useful, but not only do we need to capture information and conserve it, we also need to cut it loose. We need to loosen our grip on the information bits that are attached to those taxonomies, that are contained on those cells of the databases, and that are in the document and the text analytics systems categorized to within an inch of their lives.

Let them loose. Let them float around, bump into each other, and give innovators the opportunity to create their own information soup, if you will, to explore without forcing them into the structures that we have created. Because what they want to do is to find the unexpected. By creating schemas and taxonomies, we are giving them what we expect in terms of how information works. This is a tall order for knowledge management, and I leave it with you as a challenge and a question.

Welltok: A Cognitive Health Coach

We can all use a good personal assistant, one that keeps our health in mind, not just our appointments. This assistant needs to understand who we are today: our current state of mind, our location and our preferences. Recommendations on how to keep fit in July won’t work in January if you are snowed in with the flu. Instead, we need a sympathetic advisor who urges chicken soup instead of cookies, and suggests a hot shower, a nap and perhaps some gentle stretches for the aches.

This post may seem a far cry from our normal focus on cognitive computing, but in fact, it showcases one of the major leaps forward that cognitive computing will promote: true individualized recommendations that are presented within the framework of who you are, where you are, how you’re feeling, and what you like to do. Over the last two years, healthcare in particular has moved into the world of big data in order to provide individualized recommendations that are backed up with sound evidence. From cancer diagnoses to congestive heart failure, vast amounts of data have been mined to uncover new treatments or prevent hospital readmissions.

Cognitive computing is also moving into disease prevention. Welltok®, rather than focusing on disease and diagnoses, has developed a Health Optimization Platform™, CaféWell®, to help healthcare plans, providers, and employers keep consumers healthy and reward healthy behavior. The platform is a well-integrated combination of curated health and nutrition information and social and gaming technologies that drive consumer engagement.

To deliver more individualized health programs, Welltok partnered with IBM Watson in 2014 to add cognitive computing capabilities, thereby creating a personalized experience for consumers. The CaféWell® Concierge application, powered by IBM Watson, learns constantly from its users, so that it evolves to offer better, more appropriate suggestions as each individual uses the system. Jeff Cohen, Welltok’s co-founder and lead for their IBM Watson project, tells us that their goal is to make their existing platform more intelligent about each member’s health conditions and context. CaféWell strives to answer the question, “What can I do today to optimize my health?” for each of its members.

To accomplish this goal, Welltok starts with good information on health, exercise, and nutrition—from healthcare systems and well-respected structured and unstructured data sources. It factors in individual information about health status, available benefits, demographics, interests, and goals. The IBM Watson technology parses and processes this information to find facts, patterns, and relationships across sources, using a proprietary Welltok approach. Welltok also adds its taxonomy of healthcare concepts and relationships. Then it creates question-answer pairs to train the system. These question-answer pairs are a key ingredient that helps Watson enrich implicit queries. Welltok also provides navigation so that users don’t get lost as they seek answers. Free-flowing dialog between the user and the system is one of the earmarks of a cognitive application, but users need hints and choices in order to avoid frustration. Welltok provides these, constantly updating and retraining the system as it learns to predict pathways through the information. The information is filtered for each member’s health plan coverage and individual profile. Cognitive computing also incorporates temporal and spatial facets, so that the recommendations are suitable for the user’s time and place. This all eliminates information dead ends, because it prevents inapplicable information from being displayed.
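As a rough illustration of this kind of contextual filtering, the sketch below surfaces only the suggestions that fit a member's plan coverage, season, and setting. The member model, catalog, and rules are invented stand-ins, not Welltok's or Watson's actual data structures.

    # Minimal sketch of contextual filtering: show only suggestions that fit
    # the member's coverage, season, and setting. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Member:
        covered: set      # benefits this member's plan pays for
        season: str
        setting: str

    @dataclass
    class Suggestion:
        activity: str
        benefit: str
        seasons: set
        settings: set

    def recommend(member, catalog):
        return [s for s in catalog
                if s.benefit in member.covered
                and member.season in s.seasons
                and member.setting in s.settings]

    member = Member(covered={"fitness", "nutrition"}, season="winter", setting="urban")
    catalog = [
        Suggestion("outdoor group run", "fitness", {"summer"}, {"urban", "rural"}),
        Suggestion("indoor yoga class", "fitness", {"winter", "summer"}, {"urban"}),
        Suggestion("heart-healthy recipes", "nutrition", {"winter", "summer"}, {"urban", "rural"}),
    ]

    for s in recommend(member, catalog):
        print(s.activity)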

In addition to relevance, members are given incentives to participate and they are rewarded as they pass certain milestones. More importantly, the system learns their preferences and what motivates them to be healthy. For example, if you are only interested in exercising in groups, that’s what will be recommended, but if you prefer walks in the woods, you’ll instead get tips, perhaps, on places to walk or find mileage and terrain for common routes.

The Welltok use of cognitive computing has all the earmarks of a cognitive system. It’s dynamic and it learns. It parses both information sources and the user’s situation deeply, and matches the individual to the information and the recommendations. It is interactive, and it devours data—the more, the better.

One of the most fertile areas of development for cognitive applications is in this area of intelligent personal advisors. Suggestions for actions that are tailored to who you are make it more likely that you will try them. Now, where did I put the chicken soup?

The Watson Developer Challenge: Why mobile applications must be smarter

By their very nature, good mobile applications must be smarter. The physical limitations—the small screen, and the input mechanisms (1-2 fingers, or unpredictable voice recognition)—mandate that a mobile app anticipate what you want to do and make it easy to get there. No drop-down boxes, no multiple queries, not much scrolling. Certainly very little clicking to get to a new screen, or chaining multiple queries when the first one is off the mark. Forget cut and paste. For these reasons, mobile applications must be both smarter at understanding what you want and intelligently designed. That's hard.

Enter cognitive computing. If an application can really understand what the user intends, if it can classify questions and predict the kind of answer or action needed, then there will be less burden placed on the user to adapt to the limitations of the app. But cognitive computing requires real language understanding (NLP) as well as machine learning and classification. It also requires a corpus of examples to learn from. This is a level of technical prowess that would be impossible for most start-ups to develop. Enter IBM's Watson. Watson Foundations was released last quarter. And now IBM has announced the Watson Mobile Developer Challenge. This contest invites app developers to submit a proposal to develop an application on the IBM Watson platform. Developers must make a case for what they propose, demonstrating why it would be valuable. Winning apps will capitalize on Watson's strengths (a small sketch of this interaction pattern follows the list):

  • Have a question and answer interaction pattern, with questions posed in natural language
  • Draw on mostly unstructured (text) information for answers
  • Return answers that are ranked according to their pertinence to the question
  • Benefit by better understanding (analysis) of the type of question being submitted
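Here is a minimal sketch of that interaction pattern: classify the type of question being asked, then return passages from an unstructured corpus ranked by their pertinence to the question. The rules and the tiny corpus are invented stand-ins for illustration, not Watson's actual services.

    # Minimal sketch of the Q&A pattern above: classify the question type,
    # then rank passages by term overlap with the question. The rules and
    # corpus are invented stand-ins, not Watson's actual services.
    CORPUS = [
        "Watson Foundations was released last quarter.",
        "The Watson Mobile Developer Challenge invites proposals from app developers.",
        "Winning applications draw mostly on unstructured text for answers.",
    ]

    def classify(question):
        q = question.lower()
        if q.startswith(("who", "when", "where", "what")):
            return "factoid"
        if q.startswith(("why", "how")):
            return "explanatory"
        return "other"

    def rank_answers(question):
        terms = set(question.lower().rstrip("?").split())
        scored = [(len(terms & set(p.lower().split())), p) for p in CORPUS]
        return [p for score, p in sorted(scored, reverse=True) if score > 0]

    question = "When was Watson Foundations released?"
    print(classify(question))            # -> factoid
    for answer in rank_answers(question):
        print(answer)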

The catch is that applications are due by March 31st.

This contest brings cognitive computing within the reach of developers.  Watson supplies the NLP tools, question analysis, machine learning, and confidence scoring that would otherwise place cognitive computing beyond the reach of most vendors. For more information, see IBMWatson.com. The application and rules can be found at:

http://www.ibm.com/smarterplanet/us/en/ibmwatson/form_challenge.html?cmp=usbrb&cm=s&csr=watson.site_20140226&cr=dev&ct=usbrb301&cn=sec5cta