One of the predictions I made at the start of this year was that real-world experiments would become the new development paradigm for next best action in multichannel customer relationship management (CRM). Given that multichannel CRM applications are driving many big data initiatives, it’s clear that real-world experiments are infusing data management and advanced analytics development best practices more broadly. Increasingly, my big data customer engagements focus on CRM next best action, with keen customer interest in life-cycle management of the analytic applications needed for real-world experiments in marketing campaign and customer experience optimization.

This year and beyond, we will see enterprises place greater emphasis on real-world experiments as a fundamental best practice to be cultivated and enforced within their data science centers of excellence. In a next-best-action program, real-world experiments involve iterative changes to the analytics, rules, orchestrations, and other process and decision logic embedded in operational applications. You should monitor the performance of these iterations to gauge which collections of business logic deliver the intended outcomes, such as improved customer retention or reduced fulfillment time on high-priority orders.
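To make that monitoring concrete, here is a minimal Python sketch, with hypothetical field names (`variant`, `retained`) standing in for whatever your interaction logs actually capture, that tallies a retention outcome per deployed logic variant:

```python
# A minimal sketch of iteration monitoring, assuming a hypothetical log of
# customer interactions tagged with the logic variant that handled them.
from collections import defaultdict

interactions = [
    {"variant": "rules_v1", "retained": True},   # illustrative records
    {"variant": "rules_v1", "retained": False},
    {"variant": "rules_v2", "retained": True},
    {"variant": "rules_v2", "retained": True},
]

tallies = defaultdict(lambda: {"retained": 0, "total": 0})
for event in interactions:
    t = tallies[event["variant"]]
    t["total"] += 1
    t["retained"] += int(event["retained"])

for variant, t in sorted(tallies.items()):
    print(f"{variant}: {t['retained'] / t['total']:.0%} retention "
          f"over {t['total']} interactions")
```

In production you would compute these tallies over millions of logged interactions in the big data platform itself, but the comparison logic is the same.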

The key use case of next-best-action infrastructure — aka decision automation — is to allow companies to rapidly engage in real-world experiments in production applications and, if they’re bold, in their operational business model as a whole. In a CRM context, you can implement different predictive propensity models in different channels, at different interaction points, using different call-center scripts and message contents, with different customer segments, and across other experimental variables.
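As an illustration of that variable-by-variable assignment, here is a hedged Python sketch; the registry keys and model names are hypothetical, and in a real deployment each entry would reference a deployed propensity model rather than a string:

```python
# Hypothetical registry mapping (channel, segment) pairs to propensity models.
PROPENSITY_MODEL_REGISTRY = {
    ("call_center", "high_value"): "churn_model_v3",
    ("call_center", "mass_market"): "churn_model_v2",
    ("web", "high_value"): "cross_sell_model_v5",
}
DEFAULT_MODEL = "baseline_model_v1"

def select_model(channel: str, segment: str) -> str:
    """Pick the propensity model for a given interaction point."""
    return PROPENSITY_MODEL_REGISTRY.get((channel, segment), DEFAULT_MODEL)

print(select_model("web", "high_value"))     # cross_sell_model_v5
print(select_model("email", "mass_market"))  # falls back to baseline_model_v1
```

Each key in the registry is, in effect, one cell of the experiment design; adding call-center scripts or message variants simply adds dimensions to the key.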

Typically, next-best-action professionals use the results of these in-production experiments — such as improvements in response, acceptance, and satisfaction rates — to determine which next-best-action models work best in various scenarios. In assessing the efficacy of models in the real world, the core development practices are isolating key comparison variables through A/B testing and iterating tests by rapidly deploying a “challenger” model in place of the incumbent in-production “champion” model. You can use adaptive machine-learning techniques to generate a steady stream of alternate challenger models and rules that automatically kick into production when they score higher than the in-production champions in predictive power.
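Here is a runnable Python sketch of that champion/challenger promotion step, using scikit-learn and synthetic data purely for illustration; the model choices and the promote-on-higher-AUC rule are assumptions, not a prescription:

```python
# Champion/challenger comparison on a labeled holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

champ_auc = roc_auc_score(y_hold, champion.predict_proba(X_hold)[:, 1])
chall_auc = roc_auc_score(y_hold, challenger.predict_proba(X_hold)[:, 1])

# Promote the challenger only if it beats the champion on predictive power.
in_production = challenger if chall_auc > champ_auc else champion
print(f"champion AUC={champ_auc:.3f}, challenger AUC={chall_auc:.3f}")
print(f"in production: {type(in_production).__name__}")
```

In an adaptive setup, the challenger-generation step itself would be automated, with this scoring-and-swap logic running on a schedule against fresh production outcomes.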

Clearly, it can take a lot of CPU horsepower, data storage, and other massively parallel physical resources — hence big data platforms — to deliver optimal performance in real-world experiments that operate continuously across many channels and involve a deep, ever-changing repository of analytics-infused process logic. It also calls for a new type of integrated development environment, the vision for which I sketched in a recent blog post.

It’s with that in mind that I paid close attention to a new visual IDE that SAS Institute demonstrated last week at its annual analyst event. SAS gave an impressive demo of how the tool enables agile, iterative development and deployment of high-performance analytic models in a complex big data environment that may involve some combination of massively parallel data warehouses, Hadoop clusters, and in-memory analytics platforms. SAS also discussed a forthcoming rules studio for wrapping rules around deployed analytics for improved decision management. At the heart of the demonstration was the speedup of scenario analyses in marketing optimization — i.e., real-world experiments to drive next best action in outbound offer-targeting applications.

Though SAS’ IDE demo doesn’t cover the full functional breadth that I sketched out in my recent blog (in truth, no commercial tool does — yet), it points to the sort of comprehensive big data/next-best-action IDEs that we expect the industry to roll out in increasing numbers. Vendors will emphasize features in their development tools that facilitate real-world experiments, such as ensemble modeling, champion/challenger modeling, real-time model scoring, and automatic best-model selection. We also expect an emphasis on big data/next-best-action IDEs that let you converge heterogeneous models into a unified repository with comprehensive metadata, governance, collaboration, and deployment tools. EMC Greenplum’s new Chorus tool is also noteworthy in that respect.
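As a flavor of what automatic best-model selection might look like under the hood, here is a hedged Python sketch that scores a pool of candidate models by cross-validation and keeps the top performer; the candidate list is illustrative, and a real IDE would draw it from a governed model repository:

```python
# Automatic best-model selection over a candidate pool via cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"auto-selected model: {best}")
```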

What do you think? Are these vendors biting off more than you can chew?