Machine learning

Using AI in an Uncertain World

In life, nothing is certain except death and taxes (and even the particulars of these are uncertain), so uncertainty is something humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we have all learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.

Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy, and algorithms are crafted by humans who have certain biases about how systems and the world work—and those biases may not match yours. Furthermore, no data set is unbiased, and we train AI systems on data sets with built-in biases or with holes in the data. These systems are, by their very nature, biased or lacking in information. If we depend on them to be perfect, we are letting ourselves in for errors, mistakes, and even disasters.

However, relying on biased systems is no different from asking a friend who shares your world view for information that may bolster that view rather than balance it. And we do this all the time. Finding balanced, reliable, reputable information is hard and sometimes impossible. Anyone navigating an uncertain world tries to make decisions based on balanced information. The import of the decision governs (or should govern) the effort we make in hunting for reliable but differing sources. The speed with which a decision must be made often interferes with this effort. And we need to accept that our decisions will be imperfect, or even outright wrong, because no one can amass and correctly interpret everything there is to know.

Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision making. Adding the input of a well-crafted, well-tested system based on a large volume of reputable data can speed up human decision making and improve the outcome. There are good reasons for this: human thinking and AI systems balance each other. They can plug each other's blind spots. Humans make judgments based on their world view. They are capable of understanding priorities, ethics, values, justice, and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. But humans can decide whether to use that information. This makes a perfect partnership in which one of the partners won't be insulted if its input is ignored.

Adding AI to the physical world, where snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example. In the abstract, and at a high level, it's been shown that most accidents and fatalities are due to human error, so self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver. On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to help the elderly or the disabled live their lives independently. Like Alexa or Siri on a smaller scale, this could change lives. We come back to the matter of context, use, and value. There is no single answer to human questions of “should.”

This brings us to the question of trust. Should we trust AI systems and under what circumstances? That depends on:

  • The impact of wrong or misleading information: Poor decisions? Physical harm? Momentary annoyance?
  • The amount and reliability of the data that feeds the system
  • The goals of the system designers: Are they trying to convince you of something? Mislead you? Profit from your actions?
  • The quality of the question/query

Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements, and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Good design is the key—within the context of the use and the user.
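One small illustration of what helping people understand probabilistic output could look like in practice: translate a raw confidence score into hedged language, and abstain when the evidence is thin. This is a minimal sketch in Python; the function name, thresholds, and wording are invented for illustration, not drawn from any particular product.

```python
# A hypothetical sketch of surfacing uncertainty instead of hiding it.
# All names and thresholds here are illustrative assumptions.

def recommend(label: str, probability: float, abstain_below: float = 0.55) -> str:
    """Turn a raw (label, probability) pair into hedged, human-readable advice."""
    if probability < abstain_below:
        return (f"Not enough evidence for a recommendation "
                f"(best guess: {label}, p={probability:.2f}).")
    if probability < 0.80:
        return f"Tentatively suggests '{label}' (p={probability:.2f}); consider a second source."
    return f"Suggests '{label}' with high confidence (p={probability:.2f})."

for p in (0.45, 0.70, 0.93):
    print(recommend("carry an umbrella", p))
```

The point of the sketch is the interface, not the model: the same prediction reads very differently when the system says how sure it is.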

IBM’s Watson Expands its Toolbox: Acquires AlchemyAPI

With its acquisition last week of AlchemyAPI, IBM's Watson Group added new tools and expertise to its already rich and growing array. AlchemyAPI's technology complements and expands the core IBM Watson features. It collects and organizes information with little preparation, making it a quick on-ramp for building a collection of information that is sorted and searchable. It works across subject domains and doesn't require the domain expertise that the original Watson required. Its unsupervised deep learning architecture is designed to extract order from large collections of information, including text and images, across domains.

In contrast, the original Watson tools used to understand, organize, and analyze information demand some subject expertise. For best results, experts are required to build ontologies and rules for extracting facts, relationships, and entities from text. The result is a mind-boggling capability to hypothesize, answer questions, and find relationships, but it takes time to build and is specific to a particular domain. That is both good and bad: expert-built models provide a depth of understanding, but at a significant cost in the time it takes to get up and running. The Watson tools are also text-centered, although significant strides have been made to add structured information as well as images and other forms of rich media.
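To make the contrast concrete, here is a minimal sketch of the expert-authored, rule-based style of extraction, in Python. The rule and the sample sentence are invented; real Watson ontologies and annotators are far richer, but the flavor is the same: a human encodes an expectation about language, and the machine applies it.

```python
import re

# An expert encodes domain knowledge as a pattern: "X, CEO of Y" implies
# an employment relationship between a person and a company.
# The rule and sample text are hypothetical, for illustration only.
CEO_RULE = re.compile(
    r"(?P<person>[A-Z][a-z]+ [A-Z][a-z]+), CEO of (?P<company>[A-Z][A-Za-z]+)"
)

def extract_ceo_facts(text: str):
    """Apply the hand-written rule and return (person, relation, company) triples."""
    return [(m.group("person"), "CEO_of", m.group("company"))
            for m in CEO_RULE.finditer(text)]

print(extract_ceo_facts("Ginni Rometty, CEO of IBM, announced the acquisition."))
# [('Ginni Rometty', 'CEO_of', 'IBM')]
```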

AlchemyAPI was designed to solve precisely these problems. It creates a graph of entities and the relationships among them, with no prior expectations for how the graph will be structured; its shape depends entirely on what information is in the collection. Again, this is both good and bad. Without subject expertise, topics that are not strongly represented in the collection may be missing or get short shrift. Both approaches have their limits as well as their advantages. Experts add a level of topic understanding—of expectations—of what might be required to round out a topic. Machines don't. But machines often uncover relationships, causes and effects, or correlations that humans might not expect. Finding surprises is one of the strongest arguments for investing in big data and cognitive computing.
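By way of contrast with the rule-based sketch above, a schema-free entity graph can be built in a few lines: edges are whatever relationships the extractor happens to find, with no ontology imposed up front. The triples below are invented for illustration.

```python
from collections import defaultdict

# Triples as an extractor might emit them; no schema is assumed in advance.
triples = [
    ("IBM", "acquired", "AlchemyAPI"),
    ("AlchemyAPI", "specializes_in", "deep learning"),
    ("Watson", "developed_by", "IBM"),
]

graph = defaultdict(list)  # entity -> list of (relation, other entity)
for subject, relation, obj in triples:
    graph[subject].append((relation, obj))
    graph[obj].append((f"inverse_{relation}", subject))  # navigable both ways

# The structure emerges entirely from the data: ask what we know about "IBM".
for relation, other in graph["IBM"]:
    print(f"IBM --{relation}--> {other}")
```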

In this acquisition, Watson continues the path that helped it win Jeopardy!—by combining every possible tool and approach that might increase understanding. IBM can now incorporate multiple categorizers, multiple schemas, multiple sources, and multiple views and then compare the results by the strength of their evidence. This gives us more varied and rich results since each technology contributes something new and crucial. Like the best human analysts, the system collects evidence, sorts through it, weighs it, and comes to more nuanced conclusions.
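A toy version of that evidence-weighing step: several categorizers propose answers with confidence scores, and the candidate with the strongest accumulated evidence wins. The categorizers and scores below are invented, and Watson's actual scoring pipeline is far more sophisticated, but the principle is the same.

```python
from collections import defaultdict

# Each categorizer proposes an answer with a confidence score in [0, 1].
# Sources, answers, and scores are hypothetical.
votes = [
    ("statistical_model", "Toronto", 0.42),
    ("rule_based_model", "Chicago", 0.71),
    ("deep_learning_model", "Chicago", 0.66),
]

evidence = defaultdict(float)
for source, answer, confidence in votes:
    evidence[answer] += confidence  # accumulate evidence for each candidate

best = max(evidence, key=evidence.get)
print(f"Best-supported answer: {best} (total evidence {evidence[best]:.2f})")
# Best-supported answer: Chicago (total evidence 1.37)
```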

The Watson platform adds a major, often unsung piece to information systems: it orchestrates the contributions of the technologies so that they support, balance, and inform each other. It feeds answers, errors, and user interactions back into the system so that Watson learns and evolves, as a human would. In this, it removes some of the maddening stodginess of traditional search systems that give us the same answers no matter what we have learned. In seeking answers to complex, human problems, we need to find the right answers, perhaps some wrong answers to sharpen our understanding, and certainly the surprises that lurk within large collections. We want a system that evolves and learns, not one that rests on the laurels of a static, often outdated ontology.
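The feedback loop can likewise be sketched in miniature: user judgments nudge the weight given to each component's future contributions, so the system's behavior drifts with experience rather than staying static. The multiplicative update below is an illustrative assumption, not Watson's actual mechanism.

```python
# Hypothetical component weights; all names and the update rule are assumptions.
weights = {"statistical_model": 1.0, "rule_based_model": 1.0, "deep_learning_model": 1.0}

def record_feedback(source: str, was_helpful: bool, rate: float = 0.1) -> None:
    """Nudge a component's weight up or down based on one user judgment."""
    weights[source] *= (1 + rate) if was_helpful else (1 - rate)

record_feedback("rule_based_model", was_helpful=False)
record_feedback("deep_learning_model", was_helpful=True)
print(weights)  # the system's trust in each component drifts with experience
```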

Mirroring this technology architecture, IBM's Watson Group requires a closely knit group of strong-minded people who are experts in their separate areas: language understanding, system architecture, voting algorithms, user interaction, probability, logic, game theory, and so on. AlchemyAPI contributes its staff of deep learning experts, who are expected to join the Watson Group. It also brings its 40,000 developers worldwide, who will broaden the reach and speed the adoption of cognitive computing.