Category Archives: Usability and HCI
Facebook’s Dave Feldman writes a blog on medium.com that discusses interaction design. The most recent post examines how useful chatbots are vs. GUIs. (https://medium.com/@dfeldman/bots-conversation-is-more-than-text-1c76d153e13d) Cutting through the hype, Dave gives some examples of how adhering to one design camp or the other can create a frustrating experience for a customer. When you move the online dialog into the real world of ordering a meal in a restaurant, it quickly becomes apparent that neither design approach on its own makes sense. In one case, the experience is cold and inhuman, even though it’s efficient. In the other, the amount of information conveyed makes it impossible to consider all the choices and come to a decision. The trick is to use each approach judiciously, and probably together, depending on the amount of information to convey, the type of dialog needed in order to make a decision, and the human element that creates a warm, satisfying customer experience. Check it out. It’s both entertaining and instructive.
This week, a research group asked me, “For cognitive computing to take off, what advances do you think we need to make in the next five years?” I answered the question, first listing the major components of a cognitive system, and then discussing which ones were still fairly primitive. But the question continues to haunt me. The fact is that we’ve had most of the components for cognitive computing for a very long time: language understanding, machine learning, categorization, voting algorithms, search, databases, reporting and visualization tools, genetic algorithms, inferencing, analytics, modeling, statistics, speech recognition, voice recognition, haptic interfaces, etc. I was writing about all of these in the 1990s. As hardware capacity and architectures have advanced, and our understanding of how to use these tools has evolved, we have finally been able to put all these pieces together. But the fact remains that we have had them for decades.
Here’s what we don’t have: an understanding of how people and systems can interact with each other comfortably. We need to understand and predict the process by which people interact to question, remove ambiguity, discuss and decide. Then we need to translate that process into human-computer terms. Even more, we need a change in attitude among developers and users. Today, we tend to think about the applications we develop in a vacuum. The human initiates a process and then stands back. The machine takes the query, the problem statement, and processes it, spitting out the answer at the end. Users, shaped by this experience, don’t expect machines to be information partners, helping the information problem to evolve and then finally be resolved.
That’s not the way human information interaction happens. When two people exchange information, they first negotiate what it is they are going to discuss. They remove ambiguity and define scope. They refine, expand, or digress. This process certainly answers questions, but it does more: it builds trust and relationships, and it explores an information space rather than confining itself to the original question. That’s what we need to improve human-computer interaction: first, help in understanding the question; then, better design to enable that question to evolve over time as we add more information, resolve some pieces, and confront new puzzles.