
Building a Cognitive Business

When IBM’s Watson burst upon the scene in 2011, little did we know that it would kick off a new category of computing. Since that time, IBM has drawn most of its major divisions into the cognitive fold. That’s no surprise: cognitive computing is the ultimate Venn diagram, drawing on hundreds of technologies, from AI to ZooKeeper, in order to create systems that “interact, understand, reason, and learn.” It was apparent at the Watson Analyst Day on May 23rd that IBM’s message has been refined and has begun to gel. Just as we in the Cognitive Computing Consortium have moved beyond a vague sense that we had something fundamentally new, so too has IBM’s understanding of what cognitive computing is, and what it is good for, become much more solid.

Realizing that the complexity of cognitive solutions can be a barrier to entry, IBM Watson has begun to offer “App Starter Kits” around clusters of pre-integrated technologies, such as conversational agents, business intelligence, or audio analysis. But markets require more than a single vendor, and we have already seen the rise of new vendors that are not part of the Watson Partner constellation. Being able to mix and match platforms, apps, and technologies will require new standards not just for formats but also for storage and terminology if all types of data are to be exchanged easily. Making Watson’s cloud-based cognitive services, like sentiment extraction, NLP, predictive analytics, or speech-to-text, available on both Bluemix and Twilio is a good step in this direction. So are the emerging sets of tools to guide adopters through data selection and modeling, analytics selection, visualization choices, and interaction design.
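To make that consumption model concrete, here is a minimal sketch, in Python, of what calling a cloud-hosted cognitive service over REST typically looks like. The endpoint, API key, and response shape below are hypothetical placeholders rather than actual Watson or Bluemix service details, which vary by service and version.

    # A minimal sketch of calling a cloud-hosted cognitive service over REST.
    # The endpoint, API key, and response shape are hypothetical placeholders,
    # not the actual Watson/Bluemix service URLs or schemas.
    import requests

    ENDPOINT = "https://example-cognitive-service.test/v1/sentiment"  # hypothetical
    API_KEY = "your-api-key-here"  # placeholder credential

    def analyze_sentiment(text):
        """Send text to the (hypothetical) sentiment service and return its JSON."""
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": "Bearer " + API_KEY},
            json={"text": text},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    print(analyze_sentiment("Cognitive services make prototyping much faster."))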

Two years ago, IBM launched its Watson Division. It now has 550 partners in 45 countries, thousands of developers, and programs in conjunction with 240 universities. It continues to add new languages and services. This is the beginning of a market, but we believe that this phenomenon is bigger than a single technology market. Rather, IT will evolve from the current deterministic computing era to one that is more nuanced. We already see elements of cognitive computing creeping into new versions of older applications: more intelligent interactions and better, more contextual recommendations. In this new world, we will add probabilistic approaches, AI, predictive analytics, learning systems, and the like, but we will also retain what works from the old. That calls for a much deeper understanding of which technologies solve which problems most effectively. What kinds of problems demand a cognitive computing approach? The processes that IBM delineated as possible elements of a cognitive solution are:

  1. Converse/interact
  2. Explore
  3. Analyze
  4. Personalize
  5. Diagnose/recommend

They also emphasized the importance of data: curated, annotated data that is normalized in some way using ontologies for both categorization and reasoning. This should come as no surprise to those of us from the online industry, who know that there is no substitute for the blood, sweat, and tears that go into building a credible, usable collection of information. The question today is how to do this at scale, and at least semi-automatically, using NLP, categorizers, clustering engines, learning systems, training sets, and whatever other tools we can throw at this barrier to sense making.
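As a rough illustration of what semi-automatic curation can mean in practice, the sketch below clusters a small document collection so that curators can review and label coherent groups rather than individual items. It assumes scikit-learn and a handful of illustrative texts; a real pipeline would add entity extraction, ontology mapping, and training data on top.

    # A minimal sketch of semi-automatic curation: cluster a collection so that
    # human curators review coherent groups instead of one document at a time.
    # Assumes scikit-learn is installed; the sample texts are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    documents = [
        "Patient presented with elevated blood pressure and chest pain.",
        "Quarterly revenue exceeded analyst expectations.",
        "New hypertension treatment guidelines were published this year.",
        "Merger talks between the two retail banks have stalled.",
    ]

    # Normalize the text into TF-IDF vectors (a stand-in for richer NLP features).
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

    # Group similar documents; curators then label or correct whole clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for doc, label in zip(documents, labels):
        print(label, doc)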

By far, the biggest advances in cognitive applications have been made in healthcare, with good reason. Medicine has a long history of information curation. Advances in ontology building, controlled vocabularies (normalization), and categorization date back to the 1950s. PubMed and its predecessors had already built multilingual online collections of medical publications, clinical data, toxicology, and treatment guidelines as early as the 1980s. These resources predate IBM Watson Health and have enabled it to address health information problems with an existing, well-curated knowledge base. Healthcare requires extreme accuracy, big data analytics, advanced patient-doctor-machine natural interaction, and a probabilistic approach to solving a medical problem. Because the amount of possibly relevant information is staggering, and the outcome is a matter of life and death, the reasons for investment in cognitive systems are obvious for healthcare insurers and providers alike. There are also, of course, billions of healthcare dollars at stake. Customer engagement, retail sales, mergers and acquisitions, investment banking, and security and intelligence are not far behind in their promise, but they lack that initial bootstrapping of existing knowledge bases.

In summary, cognitive computing is moving from dream to reality. New tools and more packaged applications have reduced the complexity and the time to deploy. Early adopters are still at the experimentation stage, but from IBM and other vendors and services firms, we see gradual adoption with associated ROI, a virtuous loop that attracts yet more buying interest.

2016: the tipping point for the 3rd Platform, says IDC

IDC’s Third Platform, the next computing generation, rests on cloud computing, big data and analytics, social business, and mobility. Together, these form a foundation for scalable anywhere-anytime-any-device computing. As these trends become ubiquitous, they enable and accelerate the Internet of Things (IoT), cognitive systems, robotics, 3D printing, virtual reality, self-driving cars, and better security.

At the same time, this brave new world wreaks havoc on the old one of PCs, client-server software, and legacy apps. I would also add another disruptive ingredient to the mix: open source software, which is no longer for hobbyists and is now embedded in most new applications. IDC predicts that 2016 is the year in which spending on third platform IT will exceed that for the second platform, with a CAGR of 12.7% for 2015-2020. At the same time, IDC predicts that second platform investment will decline by 5%.

IDC’s recent surveys show that, in terms of maturity, most companies today are in the digital exploration or early platform development phase, with 14% having no interest in digital transformation and only 8% already using digital transformation to disrupt competitors or markets. That will change by 2020, when 50% of businesses will be using the third platform to disrupt and transform their markets.

Other predictions:

  • Business owners, not IT, will control more of the IT budget.
  • Health services and financial services are two of the top investing industries, reaping the rewards of faster, cheaper, and more comprehensive uses of their data.
  • Other top applications now in the works include marketing and sales, retail, security, education, and media and entertainment.
  • Technology will be embedded in most applications and devices.
  • Start-ups are rife, and the shakeup has not yet begun.
  • Cognitive computing and AI are becoming requirements for developer teams: by 2018, more than 50% of developer teams will be using AI for continuous sensing and collective learning (cognitive applications and IoT).

Where does existing IT infrastructure play in this game? In our scramble as analysts to pin down trends, we often neglect the fact that existing systems and applications are still valuable. They may well be good enough for a given task or process, or they may continue to churn on, feeding into newer layers of technology stacks when appropriate. Unlike newer versions, their kinks have been worked out. The challenge for business and IT managers will be to distinguish between the promise of the new and the security of the old: when to invest, when to explore, and when to stand back and watch. Good questions, all!

Click here for more information on IDC’s take on the 3rd Platform.

How does open source fit into the enterprise?

The open source software movement raises difficult questions for CIOs:

  • Is open source software “free”?
  • If not, what are its costs and risks?
  • Does using open source software save time in deploying an application?
  • What uses are best suited to open source software?

The answer to all of these questions is, unfortunately, “it depends”.  Using open source software effectively depends on the type of application and on the expertise of the developers. It also requires the same kinds of trade-offs that are necessitated by any choice of software:  how customized does it have to be?  How accurate?  How scalable?  How usable and for which types of users?  This is particularly true in the realm of search and text analytics because both of these applications are language dependent, with all the nuances, variety and complexity that language brings.  

We find widespread use of open source components by commercial software vendors. They use open source search or text analytics as a starting point. Then they add in the vocabularies, domain knowledge, tools and widgets, connectors to other applications and information stores, process knowledge, and user interaction design to create usable, scalable applications that are suited to a specific purpose. We also find sophisticated enterprises with enough skilled developers, computational linguists, and interaction designers using open source software to build the custom applications they need. There is no doubt that, as open source applications have become more robust and the tools to use them have become available, they are an attractive alternative for many enterprises. But are they “free”? Not if you consider the time, labor, and expertise needed to make them an integral, useful part of the enterprise stack.
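As a small illustration of that layering, the sketch below creates an Elasticsearch index whose analyzer expands a domain synonym list at index and query time. The node URL and the medical vocabulary are illustrative assumptions; the commercial and in-house efforts described above go far beyond this, but the basic pattern of adding vocabulary on top of the open source engine is the same.

    # A minimal sketch of layering a domain vocabulary onto open source search:
    # create an Elasticsearch (7+) index whose analyzer expands domain synonyms
    # at index and query time. The node URL and synonym list are illustrative.
    import requests

    index_settings = {
        "settings": {
            "analysis": {
                "filter": {
                    "domain_synonyms": {
                        "type": "synonym",
                        "synonyms": [
                            "mi, myocardial infarction, heart attack",
                            "htn, hypertension, high blood pressure",
                        ],
                    }
                },
                "analyzer": {
                    "domain_analyzer": {
                        "tokenizer": "standard",
                        "filter": ["lowercase", "domain_synonyms"],
                    }
                },
            }
        },
        "mappings": {
            "properties": {"text": {"type": "text", "analyzer": "domain_analyzer"}}
        },
    }

    # Assumes a local single-node cluster; in practice this would be the cluster URL.
    resp = requests.put("http://localhost:9200/clinical-notes", json=index_settings)
    print(resp.status_code, resp.json())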

I’ll be chairing a one-day program on open source search software on Nov. 6th in Chantilly, VA, near Washington, DC, that will discuss these questions. We’ve invited some major open source search developers from the Elasticsearch, Sphinx, Lucene, and Solr projects, as well as vendors who have embedded open source software in their products. Practitioners will also discuss their experience with developing applications using open source. Eric Brown, Director of Research for IBM Watson, which embeds multiple open source products, will give the keynote, and Donna Harman from NIST’s TREC will discuss how to evaluate search effectiveness. Government employees can register for the event free. Others will get a discount on the registration fee by entering feldman2013 when they register.

In addition, we are collecting data on the use of both commercial and open source search and text analytics, and we hope that you will fill in our survey. Results will be tabulated, and all respondents will receive a summary of what we find. You can find the survey at: https://www.surveymonkey.com/s/Synthexis

I hope to see you in November.