In 2019, we predicted that there would be three high-profile, AI-related PR snafus in 2020. It’s only August, and we’ve already seen plenty of examples of AI going wrong. The ACLU sued facial recognition provider Clearview AI for violating Illinois’ Biometric Information Privacy Act; the UK’s Home Office was forced to abandon its visa-processing algorithm, which was deemed racist; and researchers recently found that automated speech recognition systems from Amazon, Apple, IBM, Google, and Microsoft perform much worse for Black speakers than for white ones.

AI will continue to err. And it will continue to surface thorny legal and accountability questions, namely: Who is to blame when AI goes wrong? I’m not a lawyer, but my father spent his career as a litigator, so I posed this question to him when I kicked off this research. His response: “That’s easy. A lawyer would say, ‘Sue everybody!’”

Regardless of where true accountability (legal or otherwise) lies, that’s inevitably what will happen as regulation of AI rises in high-risk use cases such as healthcare, facial recognition, and recruitment. So the key is for companies to build and deploy responsible AI systems from the get-go, both to minimize overall risk and to prevent those systems from behaving in illegal, unethical, or unintended ways. You will be held accountable for what your AI does, so you’d better make sure it does what it’s supposed to do.

Third-Party Risk Is AI’s Blind Spot

The AI accountability challenge is difficult enough when you’re creating AI systems on your own. But the majority of companies today are partnering with third parties (technology providers, service providers and consultancies, and data labelers) to develop and deploy AI, introducing vulnerabilities into the complex AI supply chain. Third-party risk is nothing new, but AI differs from traditional software development because of its probabilistic and nondeterministic nature.
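To make that difference concrete: traditional code can be accepted with exact pass/fail tests, while a vendor-supplied model can only be accepted against a statistical threshold and must be re-verified whenever the vendor retrains or updates it. Below is a minimal Python sketch of that idea; it is purely illustrative and not from the report, and the `third_party_model_predict` wrapper is a hypothetical stand-in (simulated with random noise) for a real vendor model.

```python
import random

# Traditional software: deterministic, so tests can assert exact outputs.
def add_tax(price: float, rate: float = 0.08) -> float:
    return round(price * (1 + rate), 2)

assert add_tax(100.0) == 108.0  # same input, same output, every time


# Hypothetical third-party model wrapper: probabilistic, so acceptance is a
# statistical threshold on held-out data, not an exact match. (Noise here
# simulates a vendor model whose training data and versioning we don't control.)
def third_party_model_predict(features):
    return int(sum(features) + random.gauss(0, 0.25) > 1.0)


holdout = [([0.2, 0.3], 0), ([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.7, 0.9], 1)] * 50
correct = sum(third_party_model_predict(x) == y for x, y in holdout)
accuracy = correct / len(holdout)

# The acceptance criterion is a tolerance band, and it has to be re-checked
# every time the vendor ships a new model version.
assert accuracy >= 0.9, f"Vendor model fell below the agreed threshold: {accuracy:.2f}"
```

The point of the sketch is the shape of the test, not the numbers: with a probabilistic component, your contract with a third party needs agreed metrics, thresholds, and retest triggers rather than a one-time functional sign-off.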

To help our clients reduce these vulnerabilities, we just published new research, AI Aspirants: Caveat Emptor, which explains how to tackle third-party risk and improve the overall accountability of AI systems. In the report, we present best practices across the AI lifecycle for ensuring accountability, such as offering bias bounties and conducting rigorous testing and third-party risk assessments.

I’m so thankful to all of the thought leaders in the AI ethics space who were willing (and quite enthusiastic) to be interviewed for this research. And I’m also grateful to all my peers at Forrester who reviewed the research and challenged my thinking along the way.

Practice Your Principles With Responsible AI

For those of you interested in other research on this topic, this is the third report in our trilogy on responsible AI:

  • Our first report focused on how to detect and prevent harmful bias.
  • The second report focused on explainability in AI.
  • And, of course, this recent report focuses on accountability in AI.
  • We also have an upcoming New Tech report on Responsible AI solutions, where we will shine a light on both established and emerging vendors in this space that offer ways to build and test trusted AI systems.

Feel free to schedule an inquiry with me if you have any questions. Thanks!