News Ticker

AI Can Be Made Legally Accountable for Its Decisions


Computer scientists, cognitive scientists, and legal scholars say AI systems should be able to explain their decisions without revealing all their secrets.


Artificial intelligence is set to play a significantly greater role in society. And that raises the issue of accountability. If we rely on machines to make increasingly important decisions, we will need to have mechanisms of redress should the results turn out to be unacceptable or difficult to understand.

But making AI systems explain their decisions is not entirely straightforward. One problem is that explanations are not free; they require considerable resources both in the development of the AI system and in the way it is interrogated in practice.

Another concern is that explanations can reveal trade secrets by forcing developers to publish the AI system’s inner workings. Moreover, one advantage of these systems is that they can make sense of complex data in ways that are not accessible to humans. So making their explanations understandable to humans might require a reduction in performance.

[Figure: Explanation systems must be separate from AI systems, say the Harvard team.]

How, then, are we to make AI accountable for its decisions without stifling innovation?

Today, we get an answer of sorts thanks to the work of Finale Doshi-Velez, Mason Kortz, and others at Harvard University in Cambridge, Massachusetts. These folks are computer scientists, cognitive scientists, and legal scholars who have together explored the legal issues that AI systems raise, identified key problems, and suggested potential solutions. “Together, we are experts on explanation in the law, on the creation of AI systems, and on the capabilities and limitations of human reasoning,” they say.

They begin by defining “explanation.” “When we talk about an explanation for a decision, we generally mean the reasons or justifications for that particular outcome, rather than a description of the decision-making process in general,” they say.

The distinction is important. Doshi-Velez and co point out that it is possible to explain how an AI system makes decisions in the same sense that it is possible to explain how gravity works or how to bake a cake. This is done by laying out the rules the system follows, without referring to any specific falling object or cake.

Laying out those general rules, however, would expose the system’s inner workings. That is precisely the fear of industrialists who want to keep their AI systems secret to protect their commercial advantage.

But this kind of transparency is not necessary in many cases. Explaining why an object fell in an industrial accident, for example, does not normally require an explanation of gravity. Instead, explanations are usually required to answer questions like these: What were the main factors in a decision? Would changing a certain factor have changed the decision? Why did two similar-looking cases lead to different decisions?

Answering these questions does not necessarily require a detailed explanation of an AI system’s workings … (read more)
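To make that point concrete, here is a minimal sketch of how a question like “would changing a certain factor have changed the decision?” can be answered by querying a system purely as a black box. The model, feature names, and threshold below are hypothetical illustrations, not the Harvard team’s method or any real product; the point is only that the probe never inspects the model’s internals.

```python
# Sketch: probing a black-box decision system with a counterfactual query.
# The model and feature names here are hypothetical stand-ins.

from typing import Callable, Dict

Decision = str
Model = Callable[[Dict[str, float]], Decision]  # opaque decision function


def factor_matters(model: Model, case: Dict[str, float],
                   factor: str, alternative: float) -> bool:
    """Return True if changing a single factor flips the model's decision.

    The model is only ever called, never inspected, so its inner
    workings stay private.
    """
    original = model(case)
    perturbed = dict(case, **{factor: alternative})
    return model(perturbed) != original


# Toy stand-in for a proprietary credit model (purely illustrative).
def toy_credit_model(applicant: Dict[str, float]) -> Decision:
    score = 0.6 * applicant["income"] - 0.4 * applicant["debt"]
    return "approve" if score > 20 else "deny"


case = {"income": 50.0, "debt": 40.0}
# Would lowering this applicant's debt have changed the outcome?
print(factor_matters(toy_credit_model, case, "debt", 10.0))  # True
```

In this sketch the question is answered with two calls to the decision function, which is the sense in which an explanation for a particular outcome need not reveal the general rules the system follows.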

via MIT Technology Review
