The wrong question about AI trust

· 16 min read

This is the first post in a six-part series on AI delegation, trust, and authority.


"Can I trust AI?" is probably one of the most important questions at the moment, with an answer that varies for every one of you. Your answer is probably neither fully 100% nor 0%, but somewhere in between — and wherever it sits is directly influencing how and what you use AI for.

"

Trust is a feeling. These five questions are a framework.

But trust is a feeling. Can we turn "trust in AI" into a framework that helps us judge our interactions with artificial intelligence? This post is the first in a series covering five questions we can ask to qualify our trust in AI:

  • What is AI allowed to change?
  • Will AI do the same thing twice given the same input?
  • Can we observe what the AI did?
  • How many decisions can the AI make on our behalf?
  • Does the AI have permission to say so when it can't do something?

If we were talking about a new junior hire instead of an AI, these are much the same questions a good manager would ask. The stories of a new developer deleting a production database on their first day should always be framed as institutional failures, not personal ones: why did the junior have access to delete it in the first place?

Similarly, these questions should be answered and defined before we delegate to an AI, and certainly before we blame it for mistakes. Get them right, and we can place more confidence and trust in what an AI can and cannot do.
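As a rough sketch of how the five questions might become guardrails in code, here is a minimal, hypothetical policy object (all names here are invented for illustration, not a real API): it holds an allowlist of actions (what the AI may change), answers the same request the same way given the same state (repeatability), logs every request (observability), enforces a decision budget (how many decisions it may make), and refuses explicitly rather than failing silently (permission to say no).

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")


class AgentPolicy:
    """Illustrative guardrails for delegating actions to an AI agent."""

    def __init__(self, allowed_actions, max_decisions):
        self.allowed_actions = set(allowed_actions)  # Q1: what may it change?
        self.max_decisions = max_decisions           # Q4: decision budget
        self.decisions_made = 0

    def request_action(self, action):
        # Q3: every request is logged, so we can observe what the AI did.
        log.info("agent requested: %s", action)
        if self.decisions_made >= self.max_decisions:
            # Q5: the agent is allowed (required) to say it can't proceed.
            return "refused: decision budget exhausted"
        self.decisions_made += 1
        # Q2: a plain allowlist check, so identical requests against the
        # same state always get the same answer.
        if action not in self.allowed_actions:
            return f"refused: '{action}' is not permitted"
        return f"executed: {action}"


policy = AgentPolicy(allowed_actions={"read_logs", "open_ticket"},
                     max_decisions=2)
print(policy.request_action("open_ticket"))    # executed: open_ticket
print(policy.request_action("drop_database"))  # refused: not permitted
print(policy.request_action("read_logs"))      # refused: budget exhausted
```

None of this makes the AI itself more trustworthy; it just makes the delegation explicit, which is the point of asking the questions first.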