The ethical pitfalls of AI

Senior Analyst Brandon Purcell discusses the current ethical pitfalls and unintended consequences of artificial intelligence and what’s at stake for executives and organizations.

Show notes:

At its heart, artificial intelligence (AI) mimics human behavior at computational speeds that humans can't match, yet the imperfections of current AI — driven in part by biased training data and overly simplistic algorithms — are placing ethics front and center.

In strictly human decision-making, there's often an opportunity to pause a process and consider the unintended consequences — and how they might affect public perception and the firm's reputation. That natural check on decision logic is not yet baked into AI, and without it, AI algorithms move almost too smoothly from sensing to thinking to acting.

These are not theoretical points but real concerns for executives and marketers who, for sound business reasons, engage different segments of their addressable market differently. Without that embedded "what about?" moment, AI can produce flawed, biased models — and PR fire drills.

In this episode, Brandon Purcell discusses the current ethical pitfalls of AI: what is causing these issues, what's at stake, and how executives, marketers, and designers can rethink and rejigger AI data and algorithms to avoid flawed models and public embarrassment.

Listen to all of Forrester’s What It Means podcast episodes.

Featuring:

Brandon Purcell
Senior Analyst