Legal Issues: Can we depend on algorithms to make decisions?

By Sue Feldman

In the cognitive computing era, there are plenty of tough technical challenges. Their difficulty pales, however, when compared to the social and legal issues these new technologies raise. Increasingly, we rely on algorithms to help us sort through the complex factors that lead to a decision. For most of us, there is no way of knowing whether the algorithm is well suited to our current situation. In fact, by their very nature, algorithms cannot be crafted to react dependably to the unforeseeable. Articles by Julia Angwin, published Aug. 1 in the New York Times and ProPublica, welcome a Wisconsin Supreme Court decision that limits the influence of algorithmic recommendations in sentencing offenders. These algorithms predict the risk that an offender will commit another crime in the future; based on that prediction, an offender might face jail time or probation. See http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?ref=opinion&_r=0 or https://www.propublica.org/article/making-algorithms-accountable.
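To make the discussion concrete, here is a deliberately toy sketch of how a risk-scoring tool of this kind might work. Everything in it is invented for illustration: the features, weights, and cutoff are hypothetical, not drawn from any real sentencing tool. The actual products, such as the COMPAS system examined in ProPublica’s reporting, are proprietary, which is precisely why their scores are so hard to examine and challenge.

```python
import math

# Hypothetical feature weights; a real tool learns these from historical
# data that the defendant typically cannot inspect.
WEIGHTS = {
    "prior_offenses": 0.45,         # more priors -> higher score
    "age_at_first_offense": -0.03,  # older at first offense -> lower score
    "is_employed": -0.30,           # 1 = employed, 0 = unemployed
}
BIAS = -1.0
THRESHOLD = 0.5  # invented cutoff separating "high" from "low" risk

def risk_score(features: dict) -> float:
    """Return a logistic score in [0, 1]; higher means higher predicted risk."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

offender = {"prior_offenses": 3, "age_at_first_offense": 19, "is_employed": 0}
score = risk_score(offender)
print(f"risk score: {score:.2f} -> {'HIGH' if score > THRESHOLD else 'LOW'} risk")
```

The point of the sketch is not the arithmetic, which is trivial, but what it hides: a judge sees only the final label, while the weights, the training data, and the choice of threshold, each a human judgment frozen in code, stay out of view.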

There is no stuffing the algorithm genie back in the virtual bottle. The fact is that we need help in making sense of the welter of data that showers us whenever we make a decision. From choosing a carpenter to treating a cancer patient, the human mind can’t take in every available data point and arrive at the best decision in a reasonable amount of time. For the most part, that is not a problem. We don’t need to know everything to make an acceptable decision. There are plenty of good carpenters, restaurants, and books. Rarely are day-to-day decisions a matter of life or death. But sometimes they are. From self-driving cars to medical treatment, when lives are at stake, should we rely on algorithms alone?

Our society tends to rate the accuracy of computer results much more highly than that of human decisions. For some reason, we leave our skepticism behind when recommendations are digital. What has created this aura of infallibility? As a young researcher, I found that if I handed a client the same information in the same words, once as a digital record and once as a photocopy, the digital version was more readily accepted. That believability bias hasn’t changed much since then. It’s time to develop a more mature approach to melding digital evaluations with human common sense. We need to ensure that the path to a digital recommendation is transparent and that the underlying data is reliable, so that we can judge the conclusions for ourselves. We also need to teach skepticism.

Computers and humans complement each other. Neither is perfect. Combined, human sense-making and algorithmic pattern detection make for a more complete (but still imperfect) understanding. Angwin says we must require “the right to examine and challenge the data used to make algorithmic decisions about us.” That’s a good first step.
