Analytics

Adopting Cognitive Computing: A Status Report

Cognitive computing is emerging as a significant part of the next generation of computing. Because it is early days in this new generation, there is still no widespread understanding of what it is and how it differs from some of its relatives: AI, the internet of things, machine learning, conversational systems, bots, or NLP. We see that companies in both the US and Europe are very interested, but most are still at the experimentation and proof-of-concept stage. We will be tracking some of these projects as they develop their cognitive applications and roll them out more broadly. There is no question, though, that interest is high, and that the ability to augment and assist users, as well as to move from static to dynamic systems, has great appeal.

I recently had the opportunity to attend a focus group, sponsored by SAS Institute, on cognitive computing adoption outside the US. Attendees came from Denmark, Japan, Finland, Serbia, Netherlands, Sweden, Switzerland, India and Ireland. They represented financial services, telecom, consumer product manufacturers, government agencies, and airline companies. Here are some gleanings from their wide-ranging discussion.

How are you using or how do you expect to use cognitive computing?

  • Automatically revise and evolve rules to expedite adaptation
  • Uncover and improve best business practices and processes
  • Detect patterns of behavior, spot anomalies, and identify risks
  • Augment human agents who can't handle the current workload by automating the more predictable aspects of the job

Why move to cognitive computing?

  • Handle large amounts of data with many more variables, especially textual data
  • Reduce the need to add manpower; people just don't scale
  • Stay ahead of competitors
  • Uncover surprises (a side benefit of a demonstration project that was originally designed to augment the human workforce)
  • Curiosity to see what benefits cognitive computing might deliver that current systems cannot
  • Get rid of silos
  • Automate predictable or repeatable work
  • Augment human work by developing digital assistants

Examples of uses:

  • Speech-to-speech product sales. One participant's innovation lab is experimenting with a personalized app that will use machine learning to replace hundreds of business rules and some 20 predictive models. Machine learning will allow the models to evolve and will help revise rules faster (see the sketch after this list).
  • Discovering and extracting patterns of best business practices from hundreds of business managers in order to establish KPIs worldwide. The need is to understand what practices work and why.
  • Expediting transaction processing by moving from a rules-based process to teaching a system how to assess transactions and minimize delays.
  • Tracking and analyzing invoices, currently done with rules and econometric models. The goal is to teach a system to automate model development and modification, and to extend beyond rules and econometric models by adding sentiment from incoming, non-English communications.
  • Recognizing patterns of behavior to find anomalies and predict risk, in order to assess people and goods more thoroughly.
  • Automating responses to customers at a more individual level, as part of a big data project to analyze customer opinions.
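The rules-versus-learning trade-off in the first example is easy to illustrate. Below is a minimal, hypothetical sketch in Python: a hand-written business rule stays fixed until someone edits it, while a simple learned policy re-fits its decision threshold as new outcomes arrive. The discount rule, the customer fields, and the data are all invented for illustration, not taken from any participant's system.

```python
# Hypothetical sketch: a static business rule vs. a policy that adapts to new data.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Customer:
    monthly_spend: float
    years_active: int
    churned: bool  # observed outcome, used only for re-fitting

def rule_based_offer(c: Customer) -> bool:
    # A hand-written rule: fixed until a person rewrites it.
    return c.monthly_spend > 100 and c.years_active >= 2

class LearnedOffer:
    """Toy learned policy: offer a discount when spend falls below the
    average spend of customers who later churned (re-estimated from data)."""
    def __init__(self) -> None:
        self.threshold = 100.0  # initial guess, refined as outcomes arrive

    def refit(self, history: list[Customer]) -> None:
        churned_spend = [c.monthly_spend for c in history if c.churned]
        if churned_spend:
            self.threshold = mean(churned_spend)

    def offer(self, c: Customer) -> bool:
        return c.monthly_spend < self.threshold

history = [Customer(80, 1, True), Customer(150, 3, False), Customer(60, 2, True)]
new_customer = Customer(65, 1, False)

print(rule_based_offer(new_customer))  # False: the static rule never changes
policy = LearnedOffer()
policy.refit(history)                  # the learned threshold updates from outcomes
print(policy.offer(new_customer))      # True: spend 65 is below the re-fit threshold of 70
```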

Across the board, participants were augmenting existing applications and seeking net new benefit from cognitive computing systems. Completely new products or drastic changes to business processes were not seen as offering enough tangible business benefit to justify adoption. In every case, however, changes and improvements to existing business practices were expected.

Challenges

This group of early adopters was proceeding with caution. They had bruises from past new-technology experiments and did not believe the hype around AI today. In each case, it was apparent that they had support from high-level management and that they were starting with a proof of concept, or several. We have heard this from other buyers. Several are working with more than one vendor, trying to compare dissimilar products with little in the way of best practices to guide them.

The first concern to emerge was that these systems are often a black box: it was not clear why they were getting the recommendations that were delivered. Because business systems are traditionally database-driven and deterministic, this ambiguity appears unacceptable for some uses today; the buyers felt they needed the evidence behind the results. Probabilistic systems, including search engines, have long struggled with this problem. Information systems of all sorts deliver only what you ask for, not what you should have asked for, yet they are seen as precise and complete. Managing expectations is a challenge for vendors and for IT managers.
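One common way to address the black-box concern, not specific to any vendor discussed here, is to return the evidence alongside the score. The sketch below, with an invented risk model and made-up feature weights, shows a prediction accompanied by the per-feature contributions that produced it, so a reviewer can see why the recommendation came out as it did.

```python
# Hypothetical sketch: surface the evidence behind a score, not just the score.
# The features, weights, and transaction values are invented for illustration.
FEATURE_WEIGHTS = {
    "amount_vs_typical": 2.0,   # how far the amount deviates from the customer's norm
    "new_merchant": 1.5,        # 1 if the merchant has not been seen before
    "foreign_country": 0.8,     # 1 if the transaction is outside the home country
}

def score_with_evidence(features: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    """Return a risk score plus each feature's contribution, largest first."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    evidence = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, evidence

score, evidence = score_with_evidence(
    {"amount_vs_typical": 3.2, "new_merchant": 1.0, "foreign_country": 0.0}
)
print(f"risk score = {score:.1f}")
for name, contribution in evidence:
    print(f"  {name}: {contribution:+.1f}")   # the 'why' behind the recommendation
```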

Other group members were concerned about the need for a lot of computing power. Several mentioned the challenge of developing non-English applications because most of the research has been in English-based systems. Perhaps most intriguing in terms of issues, though, were the predicted “What-if” questions: will we lose the institutional memory that originally trained the system? If so and if the system breaks down, will we be able to fix it? Centralized systems are always a problem, they said. They must be up and running 24/7. They must be reliable. That’s a challenge for any system.

Finally, they pointed to interaction design as a great unknown, especially for non-IT, non-analyst business users who need access to data stores but won’t understand the system design behind the interface. Right now there are experiments, but no accepted best practices.

It is apparent that SAS is seizing the trend toward cognitive computing. The announcement of SAS Viya™ at this conference, along with a variety of tools for both its loyal developer and analyst base and a wider business-user audience, positions the company nicely as both a partner with other cognitive and IoT platforms and as a potential competitor.

We will continue to track cognitive use cases and report on them. The field is evolving rapidly. Focus groups like this one, and like the Cognitive Computing Consortium's soon-to-arrive discussion forum, will enable experimenters to teach each other, perhaps mitigating mistakes that might otherwise be widespread.

2016: the tipping point for the 3rd Platform, says IDC

IDC's Third Platform, the next computing generation, rests on cloud computing, big data and analytics, social business, and mobility. Together, these form a foundation for scalable anywhere, anytime, any-device computing. As these trends become ubiquitous, they enable and accelerate the Internet of Things (IoT), cognitive systems, robotics, 3D printing, virtual reality, self-driving cars, and better security.

At the same time, this brave new world wreaks havoc on the old one of PCs, client-server software, and legacy apps. I would also add another disruptive ingredient to the mix: open source software, which is no longer for hobbyists and is now embedded in most new applications. IDC predicts that 2016 is the year in which spending on third-platform IT will exceed that for the second platform, with a CAGR of 12.7% for 2015-2020. At the same time, it predicts that second-platform investment will decline by 5%.
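As a quick sanity check on what a 12.7% CAGR implies (the arithmetic below is mine, not an IDC figure), compounding that rate over the five years from 2015 to 2020 works out to roughly an 82% increase in third-platform spending:

```python
# What a 12.7% CAGR over 2015-2020 implies for the size of the market.
cagr = 0.127
years = 5  # 2015 -> 2020
multiple = (1 + cagr) ** years
print(f"Third-platform spending would grow by a factor of {multiple:.2f} (~82%) over five years.")
```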

IDC's recent surveys show that, in terms of maturity, most companies today are in the digital exploration or early platform development phase, with 14% having no interest in digital transformation and only 8% already using digital transformation to disrupt competitors or markets. That will change: by 2020, 50% of businesses are expected to be using this platform to disrupt and transform.

Other predictions:

  • Business owners, not IT, will control more of the IT budget
  • Health services and financial services are two of the top industries to invest, reaping the rewards of faster, cheaper, and more comprehensive uses of their data.
  • Other top applications now in the works include marketing and sales, retail, security, education, media and entertainment.
  • Technology will be embedded in most applications and devices.
  • Start-ups are rife, and the shakeout has not yet begun
  • Cognitive computing and AI are becoming a requirement for developer teams: by 2018, more than 50% of developer teams will be using AI for continuous sensing and collective learning (cognitive applications and IoT)

Where does existing IT infrastructure play in this game? In our scramble as analysts to pin down trends, we often neglect the fact that existing systems and applications are still valuable. They may well be good enough for a given task or process, or they may continue to churn on, feeding into newer layers of technology stacks when appropriate. Unlike newer versions, they have had the kinks worked out. The challenge for business and IT managers will be to distinguish between the promise of the new and the security of the old: when to invest, when to explore, and when to stand back and watch. Good questions, all.

Click here for more information on IDC's take on the 3rd Platform.

Asking the right question

This week, a research group asked me, “For cognitive computing to take off, what advances do you think we need to make in the next five years?” I answered the question, first listing the major components of a cognitive system and then discussing which ones were still fairly primitive. But the question continues to haunt me. The fact is that we have had most of the components for cognitive computing for a very long time: language understanding, machine learning, categorization, voting algorithms, search, databases, reporting and visualization tools, genetic algorithms, inferencing, analytics, modeling, statistics, speech recognition, voice recognition, haptic interfaces, and so on. I was writing about all of these in the 1990s. As hardware capacity and architectures have advanced, and as our understanding of how to use these tools has evolved, we have finally been able to put all these pieces together. But the fact remains that we have had them for decades.

Here’s what we don’t have: an understanding of how people and systems can interact with each other comfortably. We need to understand and predict the process by which people interact to question, remove ambiguity, discuss, and decide. Then we need to translate that process into human-computer terms. Even more, we need a change in attitude among developers and users. Today, we tend to think about the applications we develop in a vacuum. The human initiates a process and then stands back. The machine takes the query, the problem statement, and processes it, spitting out the answer at the end. Users, for their part, do not expect machines to be information partners that help the information problem evolve and finally be resolved.

That’s not the way a human information interaction happens. If two people exchange information, they first negotiate what it is they are going to discuss. They remove ambiguity and define scope. They refine, expand or digress. This process certainly answers questions, but it does more: it builds trust and relationships, and it explores an information space rather than confining itself to the original question. That’s what we need to improve human-computer interactions: first, help in understanding the question. Then, we need better design to enable that question to evolve over time as we add more information, resolve some pieces and confront more puzzles.
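To make the idea of an evolving question concrete, here is a minimal, hypothetical sketch of an interaction loop in which the system asks a clarifying question before answering, rather than treating the first query as final. The clarification rule and the tiny "knowledge base" are invented for illustration; a real system would negotiate scope far more richly.

```python
# Hypothetical sketch: a query that is negotiated, not just answered.
# The clarifying rule and the knowledge base are invented for illustration.
KNOWLEDGE = {
    ("churn", "telecom"): "Telecom churn is driven mostly by billing disputes.",
    ("churn", "banking"): "Banking churn correlates with branch closures.",
}

def clarify(query: str, context: dict[str, str]) -> str:
    """Return a clarifying question if the query is still ambiguous, else ''."""
    if "churn" in query and "industry" not in context:
        return "Which industry are you asking about: telecom or banking?"
    return ""

def answer(query: str, context: dict[str, str]) -> str:
    return KNOWLEDGE.get(("churn", context.get("industry", "")), "I don't know yet.")

# One turn of negotiation before answering.
context: dict[str, str] = {}
query = "What drives churn?"
question = clarify(query, context)
if question:
    print(question)                  # the system asks before it answers
    context["industry"] = "telecom"  # stand-in for the user's reply
print(answer(query, context))
```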

Big Data: Beyond the Hype

Information has always been central to the functioning of an enterprise. Today, with the fast pace of business, access to the right information at the right time is critical. Enterprises need information to track the status of the organization; to answer questions; to be alerted to changes, emergencies, trends, opportunities, or risks; and to predict, model, and forecast their business.

To this, we must add one more information goal, one that is so valuable but so elusive that it has been little more than a dream: to find the unexpected: the unknown threat, the unknown opportunity. These so-called black swans lurk on the edge of our understanding, obscured by the over-abundance and scattered nature of information in the organization today.

Big data tools and technologies have been developed to help manage, access, analyze, and use vast quantities of information. Big data is often defined by the three V's: Volume (the amount of data), Velocity (the speed at which it arrives), and Variety (the number of data types or formats). But the value in big data is not really rooted in its abundance; it lies in how the data is used. Big data tools enable us to understand trends and answer questions with a degree of certainty that was not possible before, because we did not previously have enough data to support our findings. Big data approaches to healthcare are starting to enable treatments that take into consideration the particular characteristics of a patient: their age, history, or genetic makeup. We use these characteristics as a filter or lens on the medical research literature, focusing what we know within the context of that patient. Given enough data, we can also find unexpected patterns. For instance, one project uncovered previously unknown markers for predicting hospital readmissions for congestive heart failure, saving a health organization millions of dollars. Big data techniques have helped predict the next holiday retail season, uncover patterns of insurance fraud, and spot emerging trends in the stock market. We use these tools to find out whether customers are satisfied with our products, and if not, why not. Political campaigns use them, and so do managers of baseball teams.
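The "filter or lens" idea in the healthcare example can be shown in a few lines of code. Below is a minimal, hypothetical sketch that narrows a pool of study records to those matching one patient's age and genetic marker; the field names and records are invented, not drawn from any real dataset or project mentioned above.

```python
# Hypothetical sketch: patient characteristics as a lens on a body of findings.
# Field names and records are invented for illustration.
studies = [
    {"title": "Beta blockers in older adults",  "min_age": 65, "max_age": 120, "marker": None},
    {"title": "Therapy X and the BRCA1 marker", "min_age": 18, "max_age": 120, "marker": "BRCA1"},
    {"title": "Pediatric dosage trial",         "min_age": 0,  "max_age": 17,  "marker": None},
]

def relevant_studies(age: int, markers: set[str]) -> list[dict]:
    """Keep only findings that apply to this patient's age and genetic markers."""
    return [
        s for s in studies
        if s["min_age"] <= age <= s["max_age"]
        and (s["marker"] is None or s["marker"] in markers)
    ]

for study in relevant_studies(age=70, markers={"BRCA1"}):
    print(study["title"])   # the pediatric trial is filtered out for this patient
```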

Briefly, then, big data gives us plenty of data to analyze overall trends and demands, but it also helps us understand individuals within the context of a solid set of information about people who are like them. Instead of aiming at a mythical “average”, it lets us treat customers, patients and voters as individuals.

With new technologies like big data, we are at the beginning of a very complex new relationship between man and machine. Machines can find patterns and make recommendations; but people need to test these patterns for reality, and they also need to be able to hypothesize and test results. Used wisely, big data could improve customer service, healthcare, or government by allowing us to dig more deeply. Used wisely, these tools will also help us to make our organizations more flexible and adaptable in a fast-changing world.