Blogging From the IBM Big Data Symposium – Big Is More Than Just Big

Brian Hopkins
Vice President, Principal Analyst
May 13, 2011

I just attended a Big Data symposium courtesy of IBM and thought I’d share a few insights, since many of you have probably heard the term but aren’t sure what it means for you.

No. 1: Big Data is about looking out of the front window when you drive, not the rearview mirror. What do I mean? The typical decision-making process goes something like this: capture some data, integrate it, analyze the clean and integrated data, make some decisions, execute. By the time you decide and execute, the data may be too old and may have cost you too much. It’s a bit like driving by looking out of your rearview mirror.

Big Data changes this paradigm by allowing you to iteratively sift through data at extreme scale in the wild and draw insights closer to real time. This is a very good thing, and companies that do it well will beat those that don’t.

No. 2: Big is not just big volume. The term “Big Data” is a misnomer, and it is causing some confusion. Several of us here at Forrester have been saying for a while that it is about the four “V’s” of data at extreme scale – volume, velocity, variety, and variability. I was relieved when IBM came up with three of them; variability was the one they left out.

Some of the most interesting examples we discussed centered on the last three V’s – we heard from a researcher who is collecting vital-sign data from premature babies and correlating changes in heart rates with early signs of infection. According to her, they collect 90 million data points per patient per day! What do you do with that stream of information? How do you use it to save lives? It is a Big Data problem.
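To put that 90-million-a-day figure in perspective, here is a quick back-of-the-envelope sketch of the per-patient stream. Only the daily count comes from the talk; the 8-bytes-per-reading size is my own assumption for illustration.

```python
# Rough scale of the neonatal monitoring stream described above.
# 90 million data points/patient/day is the figure from the talk;
# 8 bytes per reading is an assumed size (one double-precision value).

POINTS_PER_DAY = 90_000_000
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_POINT = 8  # assumption, not from the talk

points_per_second = POINTS_PER_DAY / SECONDS_PER_DAY
gb_per_day = POINTS_PER_DAY * BYTES_PER_POINT / 1e9

print(f"~{points_per_second:,.0f} readings/second per patient")
print(f"~{gb_per_day:.2f} GB/day per patient (at {BYTES_PER_POINT} bytes each)")
```

That works out to over a thousand readings per second for a single patient – well beyond what a traditional capture-integrate-analyze cycle can keep up with.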

Also interesting was the look we got at how Watson works. Watson deals with the variability characteristic of Big Data – specifically, the variability of meaning in natural language – and uses Big Data technology to resolve it in less than 3 seconds. Factoid 1 – it took IBM 80 teraflops of processing power to meet the 3-second SLA. Factoid 2 – Watson only required 1TB of data to do his magic. Of course, that terabyte was processed out of many petabytes of source content using . . . Big Data technology as well.

So what’s our biggest challenge with Big Data? Not surprisingly, I don’t think it’s the technology. I think that the biggest problem is going to be getting our business partners to adapt their thinking to leverage the power available to them via this new technology.

I’ll be discussing this more at the Forrester IT Forum and would love to hear from you there, or on the Forrester EA Community.

See you soon.
