Understanding and Selecting Cognitive Applications

This document is our first public draft of research we have done in conjunction with the Cognitive Computing Consortium and Babson College.

Our objective in this document is two-fold. Cognitive computing as an approach to human-machine problem solving is new and hence somewhat unfamiliar. Consequently, like any other new technology, it comes with a certain amount of hype and confusion that cloud its adoption. Our first objective is to briefly distinguish how AI and cognitive computing differ not only from each other but also from traditional information systems. Our second objective is to present a set of tools that will guide buyers and vendors of cognitive applications through a set of decision points. Cognitive applications are new and largely untested. We hope that as you select, deploy, and test these new applications, you will discuss your experience with them so that we can test the decision tools described in the downloadable document. Please contact me at sue@synthexis.com with questions and comments. We hope that our work provides some clarity for this burgeoning field.

To download, click here: Understanding Cognitive Computing

Using AI in an Uncertain World

Aside from death and taxes (and even the particulars of those are uncertain), uncertainty is something that humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we have all learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.

Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy, and that algorithms are crafted by humans who have their own assumptions about how systems and the world work, assumptions that may not match yours. Furthermore, no data set is unbiased, and we train AI systems on data sets with built-in biases or with holes in the data. These systems are therefore, by their very nature, biased or lacking in information. If we depend on them to be perfect, we are letting ourselves in for errors, mistakes, and even disasters.
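To make the point concrete, here is a minimal sketch (synthetic data, and assuming scikit-learn is available) of how a training set that under-represents one group yields a model that quietly performs worse for that group:

```python
# Illustrative only: a classifier trained mostly on "group A" data is then
# asked about "group B", whose pattern it has barely seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; each group follows a slightly different rule.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples: accuracy for the under-represented group lags badly.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The model is not malicious; it simply reflects the holes in what it was shown, which is exactly the kind of built-in bias described above.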

However, relying on biased systems is no different from asking a friend who shares your world view for information that may serve to bolster that view rather than balance it. And we do this all the time. Finding balanced, reliable, reputable information is hard and sometimes impossible. Anyone trying to navigate an uncertain world tries to base decisions on balanced information. The import of the decision governs (or should govern) the effort we make in hunting for reliable but differing sources. The speed with which a decision must be made often interferes with this effort. And we need to accept that our decisions will be imperfect, or even outright wrong, because no one can amass and correctly interpret everything there is to know.

Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision making. Adding the input of a well-crafted, well-tested system based on a large volume of reputable data to human decision making can speed up and improve the outcome. There are good reasons for this: human thinking balances AI systems, and they can plug each other's blind spots. Humans make judgments based on their world view. They are capable of understanding priorities, ethics, values, justice, and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. But humans can decide whether to use that information. This makes a perfect partnership in which one of the partners won't be insulted if their input is ignored.

Adding AI to the physical world, in which snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example. In the abstract, and at a high level, it's been shown that most accidents and fatalities are due to human error. So self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose that we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver. On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to help the elderly or the disabled live their lives independently. Like Alexa or Siri on a smaller scale, this could change lives. We come back to the matter of context, use, and value. There is no single answer to human questions of "should."

This brings us to the question of trust. Should we trust AI systems and under what circumstances? That depends on:

  • The impact of wrong or misleading information: Poor decisions? Physical harm? Momentary annoyance?
  • The amount and reliability of the data that feeds the system
  • The goals of the system designers:  are they trying to convince you of something?  Mislead you?  Profit from your actions?
  • The quality of the question/query

Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements, and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Good design is the key—within the context of the use and the user.
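As one illustration of that design principle, here is a minimal sketch (purely hypothetical names and interface) of a recommendation that is delivered together with the system's confidence and its supporting evidence, rather than as a flat answer:

```python
# Illustrative design sketch: pair every answer with the system's uncertainty
# and the evidence behind it, so the user can judge how much weight to give it.
from dataclasses import dataclass

@dataclass
class Recommendation:
    answer: str          # what the system suggests
    probability: float   # the model's confidence, 0.0 - 1.0
    evidence: list[str]  # sources or features that drove the suggestion

def present(rec: Recommendation, confident_above: float = 0.8) -> str:
    """Format a probabilistic recommendation for a human reader."""
    if rec.probability >= confident_above:
        lead = f"Recommended: {rec.answer} (confidence {rec.probability:.0%})"
    else:
        lead = (f"Tentative suggestion: {rec.answer} "
                f"(only {rec.probability:.0%} confident; please verify)")
    sources = "; ".join(rec.evidence) or "no supporting evidence recorded"
    return f"{lead}\nBased on: {sources}"

print(present(Recommendation("Carry an umbrella", 0.62,
                             ["precipitation model", "radar feed"])))
```

The point is not the particular wording but the contract: probabilistic output is surfaced as probabilistic, and the user is invited to push back.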

Adopting Cognitive Computing: A Status Report

Cognitive computing is emerging as a significant part of the next generation of computing. Because it is early days in this new generation, there is still no widespread understanding of what it is and how it differs from some of its relatives: AI, the Internet of Things, machine learning, conversational systems, bots, or NLP. We see that companies in both the US and Europe are very interested, but most are still at the experimentation and proof-of-concept stage. We will be tracking some of these projects as they develop their cognitive applications and roll them out more broadly. There is no question, though, that interest is high, and that the ability to augment and assist users, as well as to move from static to dynamic systems, has great appeal.

I recently had the opportunity to attend a focus group, sponsored by SAS Institute, on cognitive computing adoption outside the US. Attendees came from Denmark, Japan, Finland, Serbia, Netherlands, Sweden, Switzerland, India and Ireland. They represented financial services, telecom, consumer product manufacturers, government agencies, and airline companies. Here are some gleanings from their wide-ranging discussion.

How are you using or how do you expect to use cognitive computing?

  • Automatically revise and evolve rules to expedite adaptation
  • Uncover and improve best business practices and processes
  • Detect patterns of behavior. Detect abnormalities. Identify risks

  • Augment human agents who can’t handle the current workload by automating the more predictable aspects of the job

Why move to cognitive computing?

  • Handle large amounts of data with many more variables. Especially textual data.
  • Reduce need for adding manpower. People just don’t scale.
  • Stay ahead of competitors
  • Uncover surprises. (This was a side benefit of a demonstration project originally designed to augment the human workforce)
  • Curiosity to see what benefits might derive from cognitive computing that we can’t get now
  • Get rid of silos
  • Automate predictable or repeatable work
  • Augment human work by developing digital assistants

Examples of uses:

  • Speech-to-speech product sales. Their innovation lab is experimenting with this. The app will be personalized and will use machine learning to replace hundreds of business rules and 20 predictive models. Machine learning will allow the models to evolve and will help revise rules faster.
  • Discover and extract patterns of best business practices from hundreds of business managers in order to establish KPIs worldwide. They need to understand what practices work and why.
  • Expedite transactional processing by moving from a rules-based process to teaching a system how to make assessments that minimize delays.
  • They are tracking and analyzing invoices using rules and econometric models. Their goal is to teach a system to automate model development and modification for this task. Extending beyond rules and econometric models, they want to add sentiment from incoming non-English communications.
  • Recognize patterns of behavior to find anomalies and predict risk to more thoroughly assess people and goods.
  • Automate responses to customers, but on a more individual level. Part of a project to analyze customer opinions—a big data project.

Augmenting existing applications and seeking net new benefit from cognitive computing systems was consistent among all participants. Completely new product development or drastic changes to business processes were not seen as providing the tangible business benefit needed to embrace adoption. In all cases, however, changes and improvements to existing business practices were expected.

Challenges

This group of early adopters was proceeding with caution. They had the bruises from past new-technology experiments and did not believe the hype around AI today. In each case, it was apparent that they had support from high-level management, and that they were starting with a proof of concept, or several. We have heard this from other buyers. Several are working with more than one vendor, trying to compare dissimilar products with little in the way of best practices to guide them.

The first concern that emerged was that these systems are often a black box: it was not clear why they were getting the recommendations that were delivered. Because business systems are traditionally database-driven, this ambiguity appears to be unacceptable for some uses today. The buyers felt that they needed the evidence behind the results. Probabilistic systems, including search engines, have long struggled with this problem. Although we know that information systems of all sorts deliver only what you ask for and not what you should have asked for, they are nonetheless seen as precise and complete. Managing expectations is a challenge for vendors and for IT managers.
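One way to provide the kind of evidence these buyers asked for, at least for simple models, is to expose each feature's contribution to a score. The sketch below is purely illustrative (hypothetical weights and feature names), not any vendor's method:

```python
# Illustrative transparency sketch: for a linear scoring model, each feature's
# weight times its value is that feature's contribution, so a recommendation
# can be delivered with its evidence rather than as an unexplained verdict.
weights = {"payment_history": 1.8, "invoice_amount": -0.6, "days_overdue": 2.3}

def explain(features: dict[str, float]) -> None:
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    print(f"risk score: {score:.2f}")
    # List the drivers of the score, largest effect first.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>16}: {c:+.2f}")

explain({"payment_history": 0.2, "invoice_amount": 1.5, "days_overdue": 0.9})
```

More complex, learned models need more elaborate explanation techniques, but the expectation is the same: show your work.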

Other group members were concerned about the need for a lot of computing power. Several mentioned the challenge of developing non-English applications, because most of the research has been on English-based systems. Perhaps most intriguing, though, were the "what-if" questions: will we lose the institutional memory that originally trained the system? If so, and if the system breaks down, will we be able to fix it? Centralized systems are always a problem, they said. They must be up and running 24/7. They must be reliable. That’s a challenge for any system.

Finally, they pointed to interaction design as a great unknown, especially for non-IT, non-analyst business users who need access to data stores but won’t understand the system design behind the interface. Right now there are experiments, but no accepted best practices.

It is apparent that SAS is seizing on this trend toward cognitive computing. The announcement of SAS Viya™ at this conference, along with a variety of tools for both their loyal developer and analyst base and a wider business-user audience, positions them nicely as both a partner with other cognitive and IOT platforms and as a potential competitor.

We will continue to track cognitive use cases and report on them. The field is evolving rapidly. Focus groups like this one, and like the Cognitive Computing Consortium’s soon-to-arrive discussion forum, will enable experimenters to teach each other, perhaps mitigating mistakes that might otherwise be widespread.

Chatbots, GUIs, and Conversational Interfaces

Facebook’s Dave Feldman writes a blog on medium.com that discusses interaction design. The most recent post examines how useful chatbots are vs. GUIs. (https://medium.com/@dfeldman/bots-conversation-is-more-than-text-1c76d153e13d) Cutting through the hype, Dave gives some examples of how adhering to one design camp or the other can create a frustrating experience for a customer. When you move the online dialog into the real world of ordering a meal in a restaurant, it quickly becomes apparent that neither design approach on its own makes sense. In one case, the experience is cold and inhuman, even though it’s efficient. In the other, the amount of information conveyed makes it impossible to consider all the choices and come to a decision. The trick is to use each approach judiciously, and probably together, depending on the amount of information to convey, the type of dialog needed in order to make a decision, and the human element that creates a warm, satisfying customer experience. Check it out. It’s both entertaining and instructive.

Legal Issues: Can we depend on algorithms to make decisions?

By Sue Feldman

In the cognitive computing era, there are plenty of tough technical challenges. Their difficulty pales, however, when compared to the social and legal issues these new technologies raise. Increasingly, we rely on algorithms to help us sort through the complex factors that lead to making a decision. For most of us, there is no way of knowing whether the algorithm is well suited to handle our current situation. In fact, by their very nature, algorithms cannot be crafted to react dependably to the unforeseeable. Articles by Julia Angwin in the New York Times and ProPublica on Aug. 1st celebrate a decision by the Wisconsin Supreme Court to limit the influence of algorithmic recommendations for sentencing offenders. The algorithms predict the risk that an offender might commit a crime in the future. Based on these recommendations, an offender might face jail time or probation. See http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?ref=opinion&_r=0 or https://www.propublica.org/article/making-algorithms-accountable.

There is no stuffing the algorithm genie back into the virtual bottle. The fact is that we need help in making sense of the welter of data that showers us whenever we make a decision. From choosing a carpenter to treating a cancer patient, the human mind can’t take in every available data point quickly enough to make the optimal decision in a reasonable amount of time. For the most part, that is not a problem. We don’t need to know everything to make an acceptable decision. There are plenty of good carpenters, restaurants, or books. Rarely are day-to-day decisions a matter of life or death. But sometimes they are. From self-driving cars to medical treatment, when lives are at stake, should we rely on algorithms alone?

Our society tends to rate the accuracy of computer results much more highly than that of human decisions. For some reason, we leave our skepticism behind when recommendations are digital. What has created this aura of infallibility? As a young researcher, I found that I could hand a client the same information in the same words as a digital record and as a photocopy and have the digital version more readily accepted. The believability bias hasn’t changed much since then. It’s time to develop a more mature approach to melding digital evaluations with human common sense. We need to ensure that the path to digital recommendations is transparent and that the underlying data is reliable so that we can judge the conclusions for ourselves. We also need to teach skepticism.

Computers and humans complement each other. Neither is perfect. Combined, human sense making and algorithmic pattern detection make for more complete (but still imperfect) understanding. Angwin says we must require “the right to examine and challenge the data used to make algorithmic decisions about us.”   That’s a good first step.

Building a Cognitive Business

When IBM’s Watson burst upon the scene in 2011, little did we know that it would kick off a new category of computing. Since that time, IBM has drawn most of its major divisions into the cognitive fold. That’s no surprise: cognitive computing is the ultimate Venn diagram, drawing on hundreds of technologies, from AI to Zookeeper, in order to create systems that “interact, understand, reason, and learn.” It was apparent at the Watson Analyst Day on May 23rd that IBM’s message has been refined and has begun to gel. Just as we in the Cognitive Computing Consortium have moved beyond a vague understanding that we had something fundamentally new, so too has IBM’s understanding of what cognitive computing is, and what it’s good for, become much more solid.

Realizing that the complexity of cognitive solutions can be a barrier to entry, IBM Watson has begun to offer “App Starter Kits” around pre-integrated clusters of technologies, like conversation agents, business intelligence, or audio analysis. But markets require more than a single vendor, and we have already seen the rise of new vendors that are not part of the Watson Partner constellation. Being able to mix and match platforms, apps, and technologies will require new standards not just for formats but also for storage and terminology if all types of data are to be exchanged easily. Making Watson’s cloud-based cognitive services, like sentiment extraction, NLP, predictive analytics, or speech-to-text, available on both Bluemix and Twilio is a good step in this direction. So are the emerging sets of tools to guide adopters through data selection and modeling, analytics selection, visualization choices, and interaction design.

Two years ago, IBM launched its Watson Division. It now has 550 partners in 45 countries, thousands of developers, and programs in conjunction with 240 universities. It continues to add new languages and services. This is the beginning of a market, but we believe that this phenomenon is bigger than a single technology market. Rather, IT will evolve from the current deterministic computing era to one that is more nuanced. We already see elements of cognitive computing creeping into new versions of older applications: more intelligent interactions and better, more contextual recommendations. In this new world, we will add probabilistic approaches, AI, predictive analytics, learning systems, and so on, but we will also retain what works from the old. That calls for a much deeper understanding of which technologies solve which problems most effectively. What kinds of problems demand a cognitive computing approach? The processes that IBM delineated as possible elements of a cognitive solution are:

  1. Converse/interact
  2. Explore
  3. Analyze
  4. Personalize
  5. Diagnose/recommend

They also emphasized the importance of data: curated, annotated data that is normalized in some way using ontologies for both categorization and reasoning. This should come as no surprise to those of us from the online industry, who know that there is no substitute for the blood, sweat, and tears that go into building a credible, usable collection of information. The question today is how to do this at scale, and at least semi-automatically, using NLP, categorizers, clustering engines, learning systems, training sets, and whatever other tools we can throw at this barrier to sense making.
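As a rough illustration of that semi-automatic approach, the sketch below (toy documents, scikit-learn assumed) clusters a handful of texts and surfaces each cluster's top terms as candidate categories for a human curator to review:

```python
# Illustrative only: vectorize text with TF-IDF, cluster it, and show each
# cluster's most characteristic terms so a curator can label and refine the
# emerging categories rather than building a taxonomy entirely by hand.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "invoice overdue payment reminder",
    "payment received invoice closed",
    "flight delayed rebooking customer complaint",
    "customer complaint about delayed baggage",
    "clinical trial results for new treatment",
    "treatment guidelines for patient care",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

# Print the top terms per cluster as candidate category labels.
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:3]]
    print(f"cluster {i}: {', '.join(top)}")
```

In a production setting the same idea scales up with larger vocabularies, better NLP, and human review loops, but the division of labor is the point: machines propose, curators dispose.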

By far the biggest advances in cognitive applications have been made in healthcare, with good reason. Medicine has a long history of information curation. Advances in ontology building, controlled vocabularies (normalization), and categorization date back to the 1950s. PubMed and its predecessors had already built multilingual online collections of medical publications, clinical data, toxicology, and treatment guidelines as early as the 1980s. These resources predate IBM Watson Health and have enabled it to address health information problems with an existing, well-curated knowledge base. Healthcare requires extreme accuracy, big data analytics, advanced patient-doctor-machine natural interaction, and a probabilistic approach to solving a medical problem. Because the amount of possibly relevant information is staggering, and the outcome is a matter of life and death, the reasons for investment in cognitive systems are obvious for healthcare insurers and providers alike. There are also, of course, billions of healthcare dollars at stake. Customer engagement, retail sales, mergers and acquisitions, investment banking, security, and intelligence are not far behind in their promise, but they lack that initial bootstrapping of existing knowledge bases.

In summary, cognitive computing is moving from dream to reality. New tools and more packaged applications have reduced the complexity and the time to deploy. Early adopters are still at the experimentation stage, but from IBM and other vendors and services firms, we see gradual adoption with associated ROI, a virtuous loop that attracts yet more buying interest.

2016: The Tipping Point for the 3rd Platform, Says IDC

IDC’s Third Platform, the next computing generation, rests on cloud computing, big data and analytics, social business, and mobility. Together, these form a foundation for scalable anywhere-anytime-any-device computing. As these trends become ubiquitous, they enable and accelerate the Internet of Things (IOT), cognitive systems, robotics, 3D printing, virtual reality, self-driving cars, and better security.

At the same time, this brave new world wreaks havoc on the old one of PCs, client-server software, and legacy apps. I would also add another disruptive ingredient to the mix: open source software, which is no longer for hobbyists and is now embedded in most new applications. IDC predicts that 2016 is the year in which spending on third-platform IT will exceed that for the second platform, with a CAGR of 12.7% for 2015-2020. At the same time, they predict that second-platform investment will be down 5%.
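For readers who want to see what a 12.7% CAGR implies over that span, here is a quick back-of-the-envelope calculation (the 12.7% figure is IDC's; the arithmetic is ours):

```python
# Compound annual growth: five years at 12.7% per year, 2015 -> 2020.
cagr = 0.127
years = 5
growth_factor = (1 + cagr) ** years
print(f"Third-platform spending grows by a factor of {growth_factor:.2f}")
print(f"i.e. roughly {100 * (growth_factor - 1):.0f}% higher in 2020 than in 2015")
```

In other words, the prediction amounts to spending nearly doubling over five years while second-platform investment shrinks.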

Their recent surveys show that, in terms of maturity, most companies today are in the digital exploration or early platform development phase, with 14% having no interest in digital transformation and only 8% already using digital transformation to disrupt competitors or markets. That will change by 2020, when 50% of businesses will be using this platform to disrupt and transform.

Other predictions:

  • Business owners, not IT, will control more of the IT budget
  • Health services and financial services are two of the top industries to invest, reaping the rewards of faster, cheaper, and more comprehensive uses of their data.
  • Other top applications now in the works include marketing and sales, retail, security, education, media and entertainment.
  • Technology will be embedded in most applications and devices.
  • Start-ups are rife, and the shakeup has not yet begun
  • Cognitive computing and AI are becoming a requirement for developer teams: by 2018, more than 50% of developer teams will be using AI for continuous sensing and collective learning (cognitive applications and IOT).

Where does existing IT infrastructure fit in this game? In our scramble as analysts to pin down trends, we often neglect the fact that existing systems and applications are still valuable. They may well be good enough for a given task or process, or they may continue to churn on, feeding into newer layers of technology stacks when appropriate. Unlike in newer versions, the kinks have been worked out. The challenge for business and IT managers will be to distinguish between the promise of the new and the security of the old: when to invest, when to explore, and when to stand back and watch. Good questions!

Click here for more information on IDC’s take on the 3rd Platform