
Understanding and Selecting Cognitive Applications

This document is our first public draft of research we have done in conjunction with the Cognitive Computing Consortium and Babson College.

Our objective in this document is twofold. Cognitive computing as an approach to human-machine problem solving is new and hence somewhat unfamiliar. Consequently, as with any other new technology, a certain amount of hype and confusion clouds its adoption. Our first objective is to briefly distinguish how AI and cognitive computing differ, not only from each other but also from traditional information systems. Our second objective is to present a set of tools that will guide buyers and vendors of cognitive applications through a set of decision points. Cognitive applications are new and largely untested. We hope that as you select, deploy, and test these new applications, you will discuss your experience with them so that we can test the decision tools described in the downloadable document. Please contact me at sue@synthexis.com with questions and comments. We hope that our work provides some clarity for this burgeoning field.

To download, click here: Understanding Cognitive Computing

Building a Cognitive Business

When IBM’s Watson burst upon the scene in 2011, little did we know that it would kick off a new category of computing. Since then, IBM has drawn most of its major divisions into the cognitive fold. That’s no surprise: cognitive computing is the ultimate Venn diagram, drawing on hundreds of technologies, from AI to ZooKeeper, to create systems that “interact, understand, reason, and learn.” It was apparent at the Watson Analyst Day on May 23rd that IBM’s message has been refined and has begun to gel. Just as we in the Cognitive Computing Consortium have moved from a vague sense that we had something fundamentally new toward a firmer definition, so too has IBM’s understanding of what cognitive computing is, and what it is good for, become much more solid.

Realizing that the complexity of cognitive solutions can be a barrier to entry, IBM Watson has begun to offer “App Starter Kits” built around pre-integrated clusters of technologies, such as conversation agents, business intelligence, or audio analysis. But markets require more than a single vendor, and we have already seen the rise of new vendors that are not part of the Watson Partner constellation. Being able to mix and match platforms, apps, and technologies will require new standards, not just for formats but also for storage and terminology, if all types of data are to be exchanged easily. Making Watson’s cloud-based cognitive services, like sentiment extraction, NLP, predictive analytics, or speech-to-text, available on both Bluemix and Twilio is a good step in this direction. So are the emerging sets of tools to guide adopters through data selection and modeling, analytics selection, visualization choices, and interaction design.

Two years ago, IBM launched its Watson Division. It now has 550 partners in 45 countries, thousands of developers, and programs in conjunction with 240 universities. It continues to add new languages and services. This is the beginning of a market, but we believe that this phenomenon is bigger than a single technology market. Rather, IT will evolve from the current deterministic computing era to one that is more nuanced. We already see elements of cognitive computing creeping into new versions of older applications: more intelligent interactions and better, more contextual recommendations. In this new world, we will add probabilistic approaches, AI, predictive analytics, learning systems, and the like, but we will also retain what works from the old. That calls for a much deeper understanding of which technologies solve which problems most effectively. What kinds of problems demand a cognitive computing approach? The processes that IBM delineated as possible elements of a cognitive solution are:

  1. Converse/interact
  2. Explore
  3. Analyze
  4. Personalize
  5. Diagnose/recommend

They also emphasized the importance of data: curated, annotated data that is normalized in some way, using ontologies for both categorization and reasoning. This should come as no surprise to those of us from the online industry, who know that there is no substitute for the blood, sweat, and tears that go into building a credible, usable collection of information. The question today is how to do this at scale, and at least semi-automatically, using NLP, categorizers, clustering engines, learning systems, training sets, and whatever other tools we can throw at this barrier to sense making.
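
How that bootstrapping might look in practice: the sketch below, which assumes scikit-learn and uses made-up sample documents, clusters text by TF-IDF similarity so that a human curator can review and label groups rather than tag every item by hand. It is an illustration of the general idea, not a description of any vendor’s actual pipeline.

# A minimal sketch of semi-automatic curation, assuming scikit-learn:
# cluster uncurated documents so that human curators can label groups
# rather than tag every item by hand. Sample documents are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Patient presented with elevated blood pressure and fatigue.",
    "Quarterly earnings exceeded analyst expectations.",
    "New hypertension treatment guidelines published this year.",
    "Stock prices fell after the merger announcement.",
]

# Represent each document as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Group similar documents; a curator then reviews and names each cluster.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

terms = vectorizer.get_feature_names_out()
for cluster in range(2):
    top_terms = [terms[i] for i in kmeans.cluster_centers_[cluster].argsort()[::-1][:3]]
    members = [i for i, label in enumerate(labels) if label == cluster]
    print(f"Cluster {cluster}: top terms {top_terms}, documents {members}")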

By far the biggest advances in cognitive applications have been made in healthcare, with good reason. Medicine has a long history of information curation. Advances in ontology building, controlled vocabularies (normalization), and categorization date back to the 1950s. PubMed and its predecessors had already built multilingual online collections of medical publications, clinical data, toxicology, and treatment guidelines as early as the 1980s. These resources predate IBM Watson Health and have enabled it to address health information problems with an existing, well-curated knowledge base. Healthcare requires extreme accuracy, big data analytics, advanced patient-doctor-machine natural interaction, and a probabilistic approach to solving a medical problem. Because the amount of possibly relevant information is staggering, and the outcome is a matter of life and death, the reasons for investment in cognitive systems are obvious for healthcare insurers and providers alike. There are also, of course, billions of healthcare dollars at stake. Customer engagement, retail sales, mergers and acquisitions, investment banking, and security and intelligence are not far behind in their promise, but they lack that initial bootstrapping of existing knowledge bases.

In summary, cognitive computing is moving from dream to reality. New tools and more packaged applications have reduced the complexity and the time to deploy. Early adopters are still at the experimentation stage, but from IBM and other vendors and services firms, we see gradual adoption with associated ROI, a virtuous loop that attracts yet more buying interest.

2016: the tipping point for 3rd platform, says IDC

IDC’s Third Platform, the next computing generation, rests on cloud computing, big data and analytics, social business, and mobility. Together, these form a foundation for scalable anywhere, anytime, any-device computing. As these trends become ubiquitous, they enable and accelerate the Internet of Things (IoT), cognitive systems, robotics, 3D printing, virtual reality, self-driving cars, and better security.

At the same time, this brave new world wreaks havoc on the old one of PCs, client-server software, and legacy apps. I would also add another disruptive ingredient to the mix: open source software, which is no longer for hobbyists and is now embedded in most new applications. IDC predicts that 2016 is the year in which spending on third platform IT will exceed that for the second platform, with a CAGR of 12.7% for 2015-2020. At the same time, they predict that second platform investment will be down 5%.

Their recent surveys show that, in terms of maturity, most companies today are in the digital exploration or early platform development phase, with 14% having no interest in digital transformation and only 8% already using digital transformation to disrupt competitors or markets. That will change by 2020, when 50% of businesses will be using this platform to disrupt and transform.

Other predictions:

  • Business owners, not IT, will control more of the IT budget.
  • Health services and financial services are two of the top investing industries, reaping the rewards of faster, cheaper, and more comprehensive uses of their data.
  • Other top applications now in the works include marketing and sales, retail, security, education, media and entertainment.
  • Technology will be embedded in most applications and devices.
  • Start-ups are rife, and the shakeout has not yet begun.
  • Cognitive computing and AI are becoming a requirement for developer teams: by 2018, more than 50% of developer teams will be using AI for continuous sensing and collective learning (cognitive applications and IoT).

Where does existing IT infrastructure play in this game? In our scramble as analysts to pin down trends, we often neglect the fact that existing systems and applications are still valuable. They may well be good enough for a given task or process, or they may continue to churn on, feeding into newer layers of the technology stack when appropriate. Unlike newer versions, they have had their kinks worked out. The challenge for business and IT managers will be to distinguish between the promise of the new and the security of the old: when to invest, when to explore, and when to stand back and watch. Good questions, all.

Click here for more information on IDC’s take on the 3rd Platform.

What Does Watson Know About Me?

IBM Watson’s Personality Insights service claims it can deduce who you are from 100 words of your writing. Unable to resist, I entered the preface from my book, The Answer Machine, into it.
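
For readers who want to try this themselves, here is a rough sketch of how text might be submitted to the service programmatically. The endpoint, version date, credential placeholders, and response field names below are assumptions based on the v3 REST API of that era, so consult IBM’s current documentation before relying on any of them.

# A rough sketch of submitting text to the Personality Insights REST API.
# The endpoint URL, version date, credential placeholders, and response
# field names are assumptions based on the v3 service of that era; check
# IBM's documentation for current values and authentication.
import requests

PROFILE_URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"

with open("preface.txt") as f:   # your own writing, at least ~100 words
    text = f.read()

response = requests.post(
    PROFILE_URL,
    params={"version": "2016-10-20"},          # assumed API version date
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),   # service credentials from Bluemix
    headers={"Content-Type": "text/plain", "Accept": "application/json"},
    data=text.encode("utf-8"),
)
response.raise_for_status()
profile = response.json()

# Print each top-level trait and its percentile (field names assumed).
for trait in profile.get("personality", []):
    print(trait.get("name"), round(trait.get("percentile", 0.0), 2))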

Here’s what Watson deduced:

You are shrewd.

You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are self-controlled: you have control over your desires, which are not particularly intense. And you are compromising: you are comfortable using every trick in the book to get what you want.

Your choices are driven by a desire for prestige.

You consider helping others to guide a large part of what you do: you think it is important to take care of the people around you. You are relatively unconcerned with tradition: you care more about making your own path than following what others have done.

*Compared to most people who participated in our surveys
One more question, Watson: do you ever give someone a negative analysis?
[FYI: here’s what I pasted into the box that resulted in this analysis:

In 2011, a very clever machine from IBM named Watson defeated two human champions in the quiz game, Jeopardy. Watson is an answer machine, and its Jeopardy win was proof that it could be done. The press was immediately abuzz: would machines replace humans? Would we need teachers, programmers, or writers in the future? Could we automate doctors?

The short answer to these questions is no, we still need people. But the better question to ask is how to join man and machine so that we can address more complex problems than either can manage alone. Machines excel at performing repetitive tasks. They don’t get tired and they don’t get bored. They are good at crunching vast amounts of information to find patterns, whether they make sense or not. They have no emotional investment in theories, and are consistent to a fault. They don’t get embarrassed if they return the wrong answer. Machines, however, are very bad at making the intuitive leaps of understanding that are necessary for breakthrough thinking and innovation. Humans excel at this kind of thinking. People can balance the imponderables that are almost impossible to program: diplomacy, subtlety, irony, humor, politics, or priorities. People are good at making sense of data and synthesizing ideas. Above all, people can understand and make exceptions to rules. Machines can’t. We need people to make decisions, but we need machines to help us filter through more information in order to make better-informed decisions. People need this assistance because they are swimming in an overwhelming sea of information, and need time to think if they are to innovate and act wisely.

For this kind of help, new types of more “intelligent,” language-capable machines, like IBM’s Watson, are a necessity. Marrying intelligent machines with humans holds great promise: machines to do the repetitive work and forage through massive amounts of information looking for patterns and evidence to support or reject hypotheses, while humans supply the necessary judgment, intuition, and system override to determine which patterns make sense. This collaboration divides up the work into what each party—machine or human—does well. Anything that follows a predictable pattern is a good candidate for automation. Health care tasks are a good example of this duality: let the machine enter diagnostic codes based on existing rules. Let advanced information systems find the latest research on treating illnesses. Gathering information, organizing it, weighing the probability of its pertinence to a particular patient—this is what a Watson does well. This frees clinicians to work with patients, assess the evidence and use it to improve patient care.

Watson is an answer machine. Part search engine, part artificial intelligence, part natural language technology, and stuffed with the specialized information to answer questions on a specific subject, be it medicine, finance, or Jeopardy. Answer machines sift through mountains of information to find patterns. Although they don’t make complex decisions that try to balance costs, emotional aspects, or ethics, they free up humans to do what machines can’t do: consider the factual and the non-factual and then make well-informed choices. They provide better answers, faster, than current search engines do. Watson is one visible, well-publicized example of an answer machine, but there are many others arriving on the scene, albeit with less fanfare. Search engine technologies, the focus of this book, have undergone a metamorphosis of their own. The simple search engine of the 1990s, which matched keywords and phrases, has been transformed into a multifaceted access point to all kinds of information—in multiple formats and from a multitude of sources. Indeed, today the term “search engine” is a misnomer. Like Watson, search today comprises more technologies than keyword search: categorization, clustering, natural language processing, database technologies, analytical tools, machine learning, and more.

This book examines the metamorphosis of search from its awkward command line youth to the emergence of something much more complex, something I call an answer machine. The following chapters look at the role that information plays in the work and personal lives of people today, the tools we have developed to interact with digital information, and the future for these technologies as they move from engineering marvels to everyday tools.]

What makes a good data scientist?

Ken Rudin, Director of Analytics at Facebook, spoke at HP’s Big Data Conference in Boston today. He attacked the four myths of big data:

  • That you need Hadoop to attack big data
  • That big data gives you better answers
  • That data science is a science
  • That the reason for big data is to get “actionable insights” from the data.

Of course, there is a kernel of truth in all of these, but there are many tools that are useful in big data, and the answers you get from it are only as good as the questions you ask. Perhaps the most important point he made is that data science is both a science and an art. Those of us who have been in some part of the information industry for longer than we care to admit agree with him. You certainly need the tools, and being a whiz in the “how” of finding and analyzing information is important. That’s the science.

But it’s only half the battle. Knowing how to ask a good question is an art. Good askers of questions must be good listeners. They are steeped in the background of the organization. They absorb the underlying reasons for why information is needed, and how it will be used. Information analysis is a way station toward an action. It’s part of the process of gathering evidence to support a decision. If you just gather information for the sake of having it, it may be interesting, but it’s not useful.

What Rudin said is that our approach to why we gather information is evolving. It has moved from “Tell me our status” to “Tell me why it’s happening” to today’s “What should I do about it?” But, he says, that’s not enough, because you also have to decide to act on that recommendation in order to change a process, a metric, a policy, or a behavior. People who can ask the right questions, balance the science and the art, and act on the conclusions will redefine the role of the data scientist or the analyst in the organization. And change the organization in the process.

We agree.

LifeLearn Sofie: A Cognitive Veterinarian’s Assistant

Like human medicine, veterinary medicine has leaped into the digital age, embracing big data, telemedicine, online access for customers, online education for practitioners, digital marketing, and social media. Both sets of practitioners are also under increasing pressure to handle more patients in less time, and to keep up with a growing body of research that becomes outdated quickly.

However, there are some key differences between human and animal medical practitioners. Complex as human medicine is, it still targets only one species. Veterinarians, however, must be prepared to deal with everything from anacondas to zebras, and conditions that range from general wellness and internal medicine to cardiology, oncology and beyond. And, their patients can’t talk.

LifeLearn is a spin-off from the University of Guelph’s Ontario Veterinary College in Ontario, Canada. When it was founded 21 years ago, its goal was to provide educational and support services, resources, technology, and tools to veterinary practices. As the field has evolved, though, so has LifeLearn. Its Innovations Group is betting on new technologies, such as digital monitoring devices for animals, to provide solid data on patients.

When the chance to partner with IBM’s Watson came along, it seemed to Jamie Carroll, LifeLearn’s CEO, and Dr. Adam Little that creating a better digital assistant could solve some of the problems that veterinarians face today. LifeLearn is one of the first partners selected by IBM Watson and is using the technology to develop a cognitive veterinary assistant, called LifeLearn Sofie™, that can ingest massive amounts of data and forage in real time for clues and connections that will allow a veterinarian to diagnose an animal’s condition quickly and accurately. Like other Watson-based assistants being developed for physicians, LifeLearn’s Sofie is a veterinary version of Watson, trained to use the information it has amassed and analyzed to generate evidence-based hypotheses and suggest the best treatment options.

Preparing the content for Watson has been a massive undertaking. Working with leading hospitals, LifeLearn has reduced that process from weeks to hours. The LifeLearn staff have also had to train Watson to answer nuanced, complex questions for which there is no single right answer. For each topic, their Watson trainers must create a set of questions that would be germane to a vet working through a case. They are now able to produce 25,000 question/answer pairs per month.

LifeLearn has not just built the underlying knowledge base but also analyzed how veterinarians gather and use information. Based on their decades of experience, they have developed an interactive application that enables veterinarians to ask questions and receive the top answers, scored for confidence. The system learns from each interaction and from feedback from users, who are asked to score the responses for relevance, quality of information, and appropriate length and depth of answers.
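
To make that feedback loop concrete, here is one hypothetical shape such a record could take. The fields simply mirror the scoring criteria described above; they are illustrative and are not LifeLearn’s actual schema.

# A hypothetical sketch of what one answer-plus-feedback record might look
# like in a system of this kind; the fields mirror the scoring criteria
# described above but are illustrative, not LifeLearn's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AnswerFeedback:
    question: str             # the veterinarian's natural-language question
    answer_id: str            # identifier of the answer that was shown
    confidence: float         # the system's confidence score for that answer
    relevance: int            # user rating, e.g. 1-5
    information_quality: int  # user rating, e.g. 1-5
    length_and_depth: int     # user rating, e.g. 1-5
    recorded_at: datetime = field(default_factory=datetime.utcnow)

# Example record that could feed back into training or re-ranking.
feedback = AnswerFeedback(
    question="What anesthesia protocol is appropriate for a geriatric cat?",
    answer_id="ans-001",
    confidence=0.82,
    relevance=4,
    information_quality=5,
    length_and_depth=4,
)
print(feedback)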

LifeLearn’s goal is to make Sofie a specialist in every corner of veterinary science. To succeed, they must uncover how veterinarians make decisions. But there is an additional challenge: educating veterinarians to understand the promise and limitations of cognitive computing. There is no single right answer, only answers that are more appropriate than others, given the patient, its owner, and the circumstances of the medical condition. Helping practitioners live with that uncertainty and complexity, and guiding them in doing so as well as possible, is the aim of applications like LifeLearn’s Sofie.

Modeling Search Behavior

Making information accessible is hard work. Certainly, there are new tools that can analyze massive amounts of data in order to bootstrap information management. However, there’s a point at which human expertise is required.

I just read a report on how and why to model search behavior from Mark Sprague of Lexington eBusiness Consulting, http://msprague.com. Mark has been in the search business as long as I have. He helps organizations understand what their customers are looking for, and what impact their information access and search design will have on customers’ ability to find what they are seeking. The report I read discusses a consumer search behavior model he built for the dieting industry. In it, Sprague explains that a good search behavior model starts by gathering data on what users are searching for, but that’s just the beginning. A behavior model can shape your information architecture, the content you post on your site, how you incorporate the search terms customers use into that content, which featured topic pages will attract views, the SEO strategy the model drives, and the changes you make to existing PPC strategies. Sprague finds the top queries, then uses them to generate titles and tags that fit the terms users are searching for, particularly the phrases. He also categorizes queries into a set of high-level topics with subtopics. These categories can and should shape the organization of a Web site, enabling users to browse as well as search.

Sprague has observed that at each stage of the online buying process, from research to deciding to purchasing, the query terms differ. This difference can be thought of as an indication of intent, and it can be used to tailor results for an individual as the user moves from one part of the process to the next. Finally, Sprague uses the terms to perform a cost-benefit analysis to improve SEO.
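
As a toy illustration of this kind of analysis (not Sprague’s actual methodology), the sketch below counts top queries and then buckets them into rough buying-stage categories by keyword. The query log, stage names, and keyword lists are invented.

# A toy illustration of the analysis described above: count the most
# frequent queries, then bucket them into rough buying-process stages by
# keyword. The query log, stages, and keyword lists are all made up.
from collections import Counter

query_log = [
    "best diet plans", "keto diet reviews", "buy meal plan subscription",
    "best diet plans", "compare weight loss programs", "keto diet reviews",
]

# Top queries suggest page titles, tags, and featured topic pages.
top_queries = Counter(query_log).most_common(3)
print("Top queries:", top_queries)

# Very rough intent stages, keyed on words that tend to appear at each stage.
stage_keywords = {
    "research": ["best", "what", "how"],
    "decide": ["reviews", "compare", "vs"],
    "purchase": ["buy", "subscription", "price"],
}

def classify_stage(query):
    """Assign a query to the first stage whose keywords appear in it."""
    words = query.lower().split()
    for stage, keywords in stage_keywords.items():
        if any(keyword in words for keyword in keywords):
            return stage
    return "unclassified"

for query, count in top_queries:
    print(f"{query!r} ({count}x) -> {classify_stage(query)}")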

This thoughtful approach starts with observing user behavior and models the information architecture and Web site to fit—not the other way around. That’s smart, and it’s good business.