Big Data and Cognitive Computing: The Next Industrial Revolution?

Big Data and Cognitive Computing: The Next Industrial Revolution? updates the trends we covered in The Answer Machine, published by Morgan & Claypool last year. This webcast, given on Jan. 30, 2014 to the Cornell Entrepreneur Network, was open to all. You can listen to the recording at

In updating the book, we found that the nascent trends we discussed in 2012 have quickly exploded. Applications that aggregate information and integrate technologies are becoming common. Task-centered design is almost a requirement. The market, driven by the buzz around big data and bombarded by information, has started to demand what vendors foresaw: there's immense value in putting together the pieces from disparate sources, and we need help doing this. IBM's Watson may have been the first to define cognitive computing, but we see others positioning themselves in this marketplace as interest grows. We'll be covering some of these new companies in the months ahead.

During the past year, as we have worked with vendors and technology buyers, we have found that one of the most difficult concepts to get across is probabilistic computing. Where does it fit in the current IT landscape? Does it replace traditional BI? We also expect to explore this topic in the coming months. Please contact me directly if you'd like to discuss it in depth or to schedule a briefing for your company. I can be reached at

Emu: Context and design (oh! and also nice technology)

Breakthroughs in technology are sometimes less about the underlying technology than about a leap in understanding how people need to use it. The iPod and its ecosystem, for instance, created a synergy between a handy gadget and the music and content people want to carry around and listen to. By understanding that people want to download, listen to, and share music easily without having to shift from one application to another, the iPod and its successor devices changed our use of content and upended whole industries. Bo Begole's book, "Ubiquitous Computing for Business," emphasizes that designing an application with the task—the context—as a starting point trumps starting with a technology if you want people to adopt that software. Business applications have lagged behind consumer applications in ease of use, but sooner or later, what we learn in the consumer space infuses new business interaction designs.

Today's launch of Emu is another consumer breakthrough that will have broad implications in the business arena as well. Consider the awkwardness of organizing an evening with friends: emailing or texting to find out if they are available, having everyone check one or more calendars, merging the answers, then agreeing on a time and place, then seeing if restaurants are available and if that time fits the movie schedule. A familiar and time-consuming process. What Emu does is technically difficult but, on the surface, simple and obvious. It lets you stay in SMS as you arrange a date, time, and location. It checks calendars, available times at favorite restaurants, and movie times. Then it makes reservations, and even shows where you all are on a map as you start to converge on the location.

Yes, I know the founders of Emu, but beyond that, I am taken with this application because it fits squarely into the trend of simple, usable applications that save time and hide technical complexity. Check it out at

How does open source fit into the enterprise?

The open source software movement raises difficult questions for CIOs:

  • Is open source software “free”?
  • If not, what are its costs and risks?
  • Does using open source software save time in deploying an application?
  • What uses are best suited to open source software?

The answer to all of these questions is, unfortunately, "it depends." Using open source software effectively depends on the type of application and on the expertise of the developers. It also requires the same kinds of trade-offs that any choice of software entails: How customized does it have to be? How accurate? How scalable? How usable, and for which types of users? This is particularly true in the realm of search and text analytics, because both of these applications are language-dependent, with all the nuances, variety, and complexity that language brings.

We find widespread use of open source components by commercial software vendors. They use open source search or text analytics as a starting point. Then they add the vocabularies, domain knowledge, tools and widgets, connectors to other applications and information stores, process knowledge, and user interaction design needed to create usable, scalable applications suited to a specific purpose. We also find sophisticated enterprises with enough skilled developers, computational linguists, and interaction designers using open source software to build the custom applications they need. There is no doubt that, as open source applications have become more robust and the tools to use them have become available, they are an attractive alternative for many enterprises. But are they "free"? Not if you consider the time, labor, and expertise needed to make them an integral, useful part of the enterprise stack.

I'll be chairing a one-day program on open source search software on Nov. 6th in Chantilly, VA, near Washington, DC, that will discuss these questions. We've invited major open source search developers from Elasticsearch, Sphinx, Lucene, and Solr, as well as vendors who have embedded open source software in their products. Practitioners will also discuss their experience developing applications with open source. Eric Brown, Director of Research for IBM Watson, which embeds multiple open source products, will give the keynote, and Donna Harman from NIST's TREC will discuss how to evaluate search effectiveness. Government employees can register for the event free of charge. Others will receive a discount on the registration fee by entering feldman2013 when they register.

In addition, we are collecting data on the use of both commercial and open source search and text analytics, and we hope you will fill in our survey. Results will be tabulated, and all respondents will receive a summary of what we find. You can find the survey at:

I hope to see you in November.

About this Blog

Information interaction, visualization, and technology integration are entering a new phase of tight harmony. The next big leaps in computing will make the engagement between technology, people, and information increasingly natural and relevant.

The Synthexis blog will cover the research, the software, and the best innovative thinking about how to deliver experiences that are intuitive and task appropriate. We will report on new research from the corporate R&D Lab to the venture-backed startup and incubator. Although our focus will be on technology advances in search and analytics of unstructured data, text, voice, and video, we are most interested in the synergy among these technologies and their counterparts in big data, business intelligence, or storage. What are the trends that will drive the market for and development of the next era in computing? How will new understanding of user interaction and user needs shape this era?

The Synthexis Blog will also highlight the role that upheavals in social attitudes are playing in the marketplace, as technology touches more and more areas of ethics, privacy, social interaction, and the meaning of individuality.

As author William Gibson noted in the Economist in 2003: "The future is already here, it's just not evenly distributed." The Synthexis Blog will help identify the changing dynamics of that distribution and highlight developments of uncommon value in the human-machine partnership.

Changing The Computing Landscape (PDF Download)

Click Here to view the presentation.