Machine-learning models may occasionally suggest an undesirable action: one that is biased, unethical, or simply bad for business. Most of the time the suggestion will be fine, but how can developers protect against the bad outcomes when it isn't?

It’s Simple

You don’t have to do what the model tells you to do. If someone told you to jump off a cliff, would you? Of course not. You put the command into a larger context (the harm you’d suffer if you obeyed) and don’t jump. Operational machine-learning models exist in a bigger context, too: inside applications whose business logic examines the model’s output and determines whether or not to act on it. If the model says “Make that million-dollar loan,” business logic can say “Don’t automatically grant any loan over $1 million.” If the model says “Don’t grant the loan to anyone in a specific ZIP code,” business logic can say “Route this loan decision to a human loan officer” instead.
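The guardrails above can be sketched in a few lines of application code. This is a minimal illustration, not a real decisioning API: the function name, thresholds, and the `flagged_zip` field are assumptions made for the example.

```python
# Illustrative business logic wrapping a model's recommendation.
# All names and thresholds here are hypothetical.

HIGH_VALUE_LIMIT = 1_000_000  # loans above this are never auto-approved


def decide_loan(application: dict, model_recommendation: str) -> str:
    """Accept, override, or escalate a model's loan recommendation."""
    # Guardrail 1: don't automatically grant any loan over $1 million.
    if model_recommendation == "approve" and application["amount"] > HIGH_VALUE_LIMIT:
        return "route_to_loan_officer"
    # Guardrail 2: denials tied to a suspect signal (e.g., ZIP code)
    # go to a human loan officer instead of taking effect automatically.
    if model_recommendation == "deny" and application.get("flagged_zip"):
        return "route_to_loan_officer"
    # Otherwise, the model's recommendation stands.
    return model_recommendation


decision = decide_loan({"amount": 2_500_000, "flagged_zip": False}, "approve")
```

The point of the sketch is that the model only ever *recommends*; the surrounding application decides.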

Digital Decisioning Platforms Make Operational Machine Learning Easier

Developers can write contextual business logic in Java, Python, or another general-purpose language, but a better solution is to use a digital decisioning platform to contextualize machine-learning models, accepting or overriding their recommendations as needed. Digital decisioning platforms provide visual tools that developers, data scientists, and business users can use to build applications that integrate machine-learning recommendations (if not model definition and execution) with decisioning logic in a single package.

Don’t Be Spooked — Do AI

Machine-learning models are fundamental to AI and the intelligent enterprise. They can make business processes smarter and customer experiences more personalized. There are as many use cases as there are applications. Train machine-learning models for as many applications as you can. But to put those models into operation, deploy them within a digital decisioning platform hosting contextual logic. Be safe out there.

See our research on digital decisioning here.