Big data is a hot topic these days. Like the "cloud" terminology we've been hearing about for the last few years, there isn't a good definition of what "big data" really is. The best one I've seen so far is data that "doesn't fit in Excel," which I like. So many people perform their analysis in a spreadsheet of some sort that if the data doesn't fit inside their edition of Excel, they'd probably consider it big.
The problem with big data, however, is that while it contains more information, it can also contain more irrelevant information. That's noted in this piece on small data (from Brent Ozar, PLF), where the author states that the signal-to-noise ratio may decrease when you examine very large data sets. You may find correlations that appear to be causations. With enough data, and enough things to examine, you can often start seeing patterns that aren't really there. These ghost patterns can lead you to draw incorrect, or at least less correct, conclusions if you don't investigate further and test your ideas on portions of your data set.
Some of you might have noticed fractal patterns like this:
This is the well-known Mandelbrot set. However, if we were to zoom in on this picture, we'd find that the patterns repeat over and over again. What holds true for the largest image we have holds true inside smaller sections. The pattern repeats.
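As a quick aside for the curious, the shape comes from a simple rule. This is a minimal sketch of the standard escape-time algorithm (not anything specific to the piece above): a point c belongs to the set when iterating z → z² + c stays bounded.

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: does z -> z*z + c stay bounded for this c?"""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escaped: c is outside the set
            return False
    return True          # stayed bounded for max_iter steps: likely inside

# Coarse ASCII rendering of the familiar shape.
for row in range(11):
    line = ""
    for col in range(31):
        c = complex(-2 + col * 0.1, -1 + row * 0.2)
        line += "*" if in_mandelbrot(c) else " "
    print(line)
```

Cranking up the resolution (and zooming the coordinate window) is all it takes to see the self-similar detail repeat at smaller scales.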
The same thing can happen with patterns in business. We may see a pattern in a large set of data, but we should verify that it also holds true for subsections of the same data set before we make a decision based on that pattern.
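That verification step is easy to sketch. In this hypothetical example, the target and all the candidate "predictors" are pure noise, so any pattern found is a ghost. We pick the column that correlates best with the target over one half of the rows, then re-test that same column on the held-out half:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 200, 2000

# A target series and many candidate predictors, all pure noise,
# so any correlation we find is a ghost pattern.
target = rng.standard_normal(n_rows)
candidates = rng.standard_normal((n_rows, n_cols))

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])

# Search the first half of the rows for the best-looking correlation.
half = n_rows // 2
first_half = [corr(candidates[:half, j], target[:half]) for j in range(n_cols)]
best = int(np.argmax(np.abs(first_half)))
print(f"best column on first half:  {first_half[best]:+.3f}")

# Re-test that same column on the second half of the rows.
second_half = corr(candidates[half:, best], target[half:])
print(f"same column on second half: {second_half:+.3f}")
```

With thousands of columns and only a hundred rows to search, the winning correlation can look impressive purely by chance; the holdout measurement is what tells you whether the pattern is real.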
The Voice of the DBA Podcasts
We publish three versions of the podcast each day for you to enjoy.