Is AI Accountable?

Any organization needs to be able to analyze and justify its decisions, for fundamental business and legal reasons. If a business decision is challenged in court, a clear account of the decision process and the data behind it must be available to justify the conclusion. And if a decision turns out to be wrong, particularly a costly one, the business must be able to analyze it and take steps to ensure it does not recur.

Unfortunately, the current state of the art in deep learning does not allow this. Yes, we understand how neural networks function: we know how to build, configure, and optimize them for various kinds of tasks. But recall from previous blog posts that these networks are trained on huge amounts of data, building complex statistical relationships that are buried within the nodes and layers of the network. For any given decision, we have no way of interrogating the AI to find out why it made a particular choice.

Obviously, this is a major issue that could forestall the widespread adoption of AI for critical business functions. How can we trust a system with important decisions when it cannot justify them? Serious work has begun within the AI community to build some level of justification and accountability into AI systems from the start, but that effort is only just beginning.
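To give a flavor of what this emerging work looks like, here is a minimal sketch (not from the original post) of one common post-hoc technique: permutation feature importance. It does not open the black box, but it does let us ask which inputs a trained network leaned on most. The dataset, network size, and parameters below are illustrative assumptions.

```python
# Sketch: probing a "black box" neural network with permutation importance.
# Assumes scikit-learn is installed; dataset and model settings are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

# Load a simple tabular dataset and split it into train and test sets.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a small neural network; its learned weights offer no human-readable rationale.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops. A large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(
    zip(X.columns, result.importances_mean), key=lambda pair: pair[1], reverse=True
)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Techniques like this provide a partial, after-the-fact picture of what influenced a model's output; they are a long way from the kind of full decision audit trail a business or a court would expect.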
