From tech vendors to governments to industry consortia to the Vatican, it seems that every day a new entity announces its set of AI principles — broad guidelines for how AI systems should be responsibly developed, trained, tested, and deployed. On the surface, this is great news, as it demonstrates an awareness that AI unleashed on the world with little forethought could have disastrous societal and economic consequences. For enterprises, it means carefully assessing the risks and benefits of AI adoption. Unfortunately, however, developing a list of lofty principles and putting those principles into practice across your organization are two very different things. For example, fairness sounds like a great goal, but there are at least 21 different definitions of fairness you can implement in your AI models. The devil, as always, is in the details.
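To make that last point concrete, here is a minimal sketch (hypothetical data and helper names, not from any particular fairness toolkit) of two common fairness definitions, demographic parity and equal opportunity, disagreeing about the same set of predictions:

```python
# A minimal illustration that different fairness definitions can disagree.
# The data and function names here are hypothetical, for demonstration only.

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
    return rate("A") - rate("B")

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (among actual positives) between A and B."""
    def tpr(g):
        pos = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

# Tiny synthetic example: ground truth, model predictions, group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"demographic parity diff: {demographic_parity_diff(y_pred, group):+.2f}")
print(f"equal opportunity diff:  {equal_opportunity_diff(y_true, y_pred, group):+.2f}")
```

On this toy data, demographic parity says the model favors group A (it receives more positive predictions), while equal opportunity says it favors group B (qualified members of B are approved at a higher rate). Which definition you operationalize is a policy decision, not a purely technical one.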

In general, most entities’ AI principles for safe, ethical, responsible, trusted, and acceptable AI have coalesced around five areas, though they may go by different names: fairness and bias, trust and transparency, accountability, social benefit, and privacy and security. Forrester has extensive existing and upcoming research on how to apply each of these principles to your business:

  • Fairness and bias. This principle is concerned with ensuring that artificially intelligent systems do not harm people and customers through inequitable treatment. The report “The Ethics Of AI: How To Avoid Harmful Bias And Discrimination” explains how AI systems can inherit bias and provides guidance on preventing bias from both an organizational and a technical perspective. On the supply side, a new market of technology vendors and service providers is emerging, offering toolkits and frameworks to help ensure the ethical development and implementation of AI systems. We are just kicking off a New Tech report on this nascent market. If you provide a solution or services in this area, please email my research associate, Aldila Yunus.
  • Trust and transparency. Because many AI systems are black boxes that human beings cannot readily interpret, there is often a need for explainability and interpretability. The reports “Evoke Trust With Explainable AI” and “What Software Developers Don’t Know About Neural Networks Could Hurt Them” discuss the explainability/accuracy trade-off in machine learning, provide a framework for deciding when to prioritize one over the other, and cover what developers should consider before adopting AI technologies. As AI gets infused into many types of new and classic software applications, the report “No Testing Means No Trust In AI: Part 1” is a call to software developers, who will need to apply testing disciplines and practices to AI-infused applications and AI-based autonomous software.
  • Accountability. AI systems are often the product of a complex supply chain that may involve data providers, data labelers, technology providers, and systems integrators. When an AI system goes wrong, who is to blame? And how can you prevent it from going wrong in the first place? We are currently researching this topic and plan to publish our report in June. If you’d like to contribute to this research, please contact my research associate, Aldila Yunus.
  • Social benefit. Many technology providers and countries stipulate in their principles that AI should be used for the greater good of society. This is happening today as companies are trying to leverage AI to develop a COVID-19 vaccine. It also aligns with the expectations of a growing and influential group of values-based customers and employees that Forrester has written about in reports such as “Live Your Values To Grow Your Business.” This is an area of particular interest for me, so if you have examples of AI creating a positive social impact, please reach out.
  • Privacy and security. As AI systems are trained and then used to differentiate treatment, they need to respect individuals’ privacy. Between GDPR and CCPA, this is the principle that has already seen the most legislation, and additional legislation is stemming from the fierce debate over the legality of facial recognition. Forrester has a rich trove of research here. To start, please see our data security and privacy playbook.

These principles need not remain esoteric and abstract. As we all shelter in place, we have the opportunity to pause and think about the future we want to create. One thing is clear — that future will be saturated with AI. Whether that AI aligns with your values depends on decisions you make today as an enterprise.

We at Forrester are always here to help you navigate this somewhat uncharted territory. Please feel free to schedule an inquiry with me or my colleagues to discuss further.