1 Billion Data Values in a Data Chart

Graham Murray / Wednesday, August 14, 2013

It's no secret that you can provide many XamDataChart series with millions of records and still have smooth interactive panning and zooming. XamDataChart is designed from the ground up to not touch data that it doesn't need for the visual, and to not generate geometry that is needlessly more complex than your eye can perceive.

Nevertheless, XamDataChart does require that you pass it an enumerable containing all the data for display, and its data source caches various information about that data. So as the number of points you wish to display increases, so does the memory utilization, and eventually even linear-time scans through the data can become a problem if you want to maintain interactive performance.

So, what if we want to display extreme amounts of data? 100 million values? 1 billion values? Well, remember how I said above that most XamDataChart series don't try to touch data values that they don't need? When you are fully zoomed out, the chart must touch every point to get the shape of the line, which it can then adapt to the physical size of the plotting area. But when you are zoomed in to a particular area, the chart can efficiently determine what range of data it needs to examine to display just that region.
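
To make that concrete, here is a minimal sketch (not the chart's actual code) of how a visible window can be mapped to an index range without scanning everything, assuming the timestamps are sorted ascending, as time series data normally is:

```csharp
using System;

static class VisibleRangeFinder
{
    // Returns the index of the first timestamp >= target (a lower-bound binary search).
    public static int LowerBound(DateTime[] times, DateTime target)
    {
        int lo = 0, hi = times.Length;
        while (lo < hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (times[mid] < target) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    // The visible slice is [first, last): only these records need to be examined.
    public static void FindWindow(DateTime[] times, DateTime viewStart, DateTime viewEnd,
                                  out int first, out int last)
    {
        first = LowerBound(times, viewStart);
        last = LowerBound(times, viewEnd);
    }
}
```

Two binary searches like this cost O(log n) comparisons each, so even with a billion points only around 30 timestamps need to be inspected to locate the visible window.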

If we were to load 1 billion data points into the chart and start very zoomed in, then the chart would only be dealing with small amounts of that data at a time, but there are two issues:

  • Even packed tightly on disk, 1 billion date-time values and 1 billion double-precision floating-point values take up roughly 15 gigabytes: at 8 bytes apiece, that's 10^9 × 16 bytes, or about 14.9 GiB. You don't really want to load all of that into memory.
  • If you were to zoom out, the chart would still need to scan all the data values to get the shape of the line.

So, what are our options here? Well, you could try not touching all the data when fully zoomed out. But that's cheating. If we were to use a regular sampling algorithm and its frequency happened to match up with some underlying frequency in the data, we'd get aliasing effects that would destroy the accuracy of the representation. Even a random sampling algorithm may miss out on outlier values that should be visible in the plot.
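
Here is a small, self-contained demo of both failure modes; the signal, the stride, and the outlier position are made up purely for illustration:

```csharp
using System;
using System.Linq;

class SamplingAliasingDemo
{
    static void Main()
    {
        // A made-up signal: a sine wave whose period is exactly 100 samples,
        // plus one outlier spike, to show both failure modes described above.
        int n = 1000000;
        var data = new double[n];
        for (int i = 0; i < n; i++)
            data[i] = Math.Sin(2 * Math.PI * i / 100.0);
        data[123456] = 50.0; // an outlier that should stay visible in any plot

        // Regular sampling with a stride that matches the signal's period:
        // every kept sample lands on the same phase, so the "line" looks flat,
        // and the spike is skipped entirely.
        int stride = 100;
        double[] strided = Enumerable.Range(0, n / stride)
                                     .Select(i => data[i * stride])
                                     .ToArray();
        Console.WriteLine($"Strided sample:  min={strided.Min():F3} max={strided.Max():F3}");

        // Taking the min and max of each 100-sample bucket (summarized here by
        // the overall extremes) keeps both the true amplitude and the outlier,
        // which is why the reduced data should store min and max per range.
        double min = double.MaxValue, max = double.MinValue;
        for (int b = 0; b < n / stride; b++)
        {
            for (int i = b * stride; i < (b + 1) * stride; i++)
            {
                min = Math.Min(min, data[i]);
                max = Math.Max(max, data[i]);
            }
        }
        Console.WriteLine($"Min/max buckets: min={min:F3} max={max:F3}");
    }
}
```

The strided sample reports a nearly flat line and never sees the spike, while the min/max reduction preserves the full ±1 amplitude and the outlier at 50.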

What if our data source knew the zoom level of the chart and could provide different data depending on that zoom level? What if we pre-calculated multiple resolutions of the data, gathering the min/max over regions of the data and storing the results as progressively smaller files? If we did this, then the data source, cognizant of the physical dimensions of the chart and the current zoom level, could pick the appropriate resolution of the data so that it can be displayed with high fidelity without overloading the computing power of the machine.
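
As a rough sketch of the "pick the appropriate resolution" step, assuming level 0 is the raw data and every level above it halves the point count, the choice might look something like this (the pixel-density target and level count are my assumptions, not shipped logic):

```csharp
using System;

static class ResolutionPicker
{
    // Level 0 is the raw data; each higher level has half as many points, each
    // carrying the min and max of the pair it replaced.
    public static int ChooseLevel(long visiblePointCount, double plotWidthInPixels, int maxLevel)
    {
        // Aim for roughly one min/max pair per horizontal pixel; anything denser
        // than that can't be distinguished visually anyway.
        double target = 2.0 * plotWidthInPixels;
        int level = 0;
        double remaining = visiblePointCount;
        while (level < maxLevel && remaining > target)
        {
            remaining /= 2.0; // each level halves the point count
            level++;
        }
        return level;
    }
}
```

For example, with 1 billion visible points and a 1,000-pixel-wide plot area, this picks level 19, leaving roughly 1,900 min/max points to read, about two per pixel.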

And as for memory? If the data source keeps as much on disk as possible and skips around inside files, it might be a bit slower than our default data source, but it should be able to handle roughly as much data as you can fit on your disk platters.
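
A minimal sketch of the "skip around inside files" part might look like the following, assuming each record is packed as an 8-byte tick count followed by an 8-byte double; the path and record layout are illustrative assumptions, not the experiment's actual format:

```csharp
using System;
using System.IO;

struct TimedValue
{
    public DateTime Time;
    public double Value;
}

static class WindowedReader
{
    const int RecordSize = 16; // 8-byte ticks + 8-byte double per record

    public static TimedValue[] ReadWindow(string path, long firstRecord, int count)
    {
        var result = new TimedValue[count];
        using (var stream = File.OpenRead(path))
        using (var reader = new BinaryReader(stream))
        {
            // Seek straight to the first visible record rather than scanning from the start.
            stream.Seek(firstRecord * RecordSize, SeekOrigin.Begin);
            for (int i = 0; i < count; i++)
            {
                result[i].Time = new DateTime(reader.ReadInt64());
                result[i].Value = reader.ReadDouble();
            }
        }
        return result;
    }
}
```

Because the records are fixed-size, the offset of any record is just its index times 16, so the reader never touches the gigabytes of data outside the visible window.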

Considering the above, I decided to try an experiment:

  • I generated 1 billion random data values.
  • I wrote a utility to successively halve the number of data values, storing min/max information for each pair of points being reduced to a single point (a sketch of this reduction appears after this list).
  • I ran that utility on the billion data values. This took some time.
  • I enabled our chart to use an alternate data source adapter.
  • I implemented an adapter that does roughly the above.
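
Here is a compact, in-memory sketch of that reduction step; the real utility presumably has to stream through files rather than hold arrays, but the halving and the min/max bookkeeping are the same idea:

```csharp
using System;
using System.Collections.Generic;

struct MinMaxPoint
{
    public double Min;
    public double Max;
}

static class PyramidBuilder
{
    // Builds every resolution level, halving the point count at each step.
    public static List<MinMaxPoint[]> BuildLevels(double[] raw)
    {
        var levels = new List<MinMaxPoint[]>();

        // Level 0: each raw value is its own min and max.
        var current = new MinMaxPoint[raw.Length];
        for (int i = 0; i < raw.Length; i++)
            current[i] = new MinMaxPoint { Min = raw[i], Max = raw[i] };
        levels.Add(current);

        // Each higher level merges adjacent pairs from the level below
        // (an odd trailing point is simply dropped here, for brevity).
        while (current.Length > 1)
        {
            var next = new MinMaxPoint[current.Length / 2];
            for (int i = 0; i < next.Length; i++)
            {
                MinMaxPoint a = current[2 * i];
                MinMaxPoint b = current[2 * i + 1];
                next[i] = new MinMaxPoint
                {
                    Min = Math.Min(a.Min, b.Min),
                    Max = Math.Max(a.Max, b.Max)
                };
            }
            levels.Add(next);
            current = next;
        }
        return levels;
    }
}
```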

Here's what the generated data file looks like on disk:

And here's what the multi-resolution versions look like:

That's a lot of data!

So, the chart still thinks it's talking to the usual data source adapter, but under the covers this extreme data source is sliding data in and out like a chameleon, caching some things but trying not to put too much in memory. As a result, you can interactively zoom and pan through 1 BILLION data values:
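
For the caching part, one simple shape it could take (purely illustrative, not the adapter's actual implementation) is a small least-recently-used cache of fixed-size blocks keyed by resolution level and block index:

```csharp
using System;
using System.Collections.Generic;

class BlockCache
{
    private readonly int _capacity;
    private readonly Dictionary<string, LinkedListNode<KeyValuePair<string, double[]>>> _map =
        new Dictionary<string, LinkedListNode<KeyValuePair<string, double[]>>>();
    private readonly LinkedList<KeyValuePair<string, double[]>> _order =
        new LinkedList<KeyValuePair<string, double[]>>();

    public BlockCache(int capacity) { _capacity = capacity; }

    // Returns the cached block for (level, blockIndex), loading it on a miss.
    public double[] GetOrLoad(int level, long blockIndex, Func<double[]> load)
    {
        string key = level + ":" + blockIndex;
        LinkedListNode<KeyValuePair<string, double[]>> node;
        if (_map.TryGetValue(key, out node))
        {
            // Recently used: move to the front so it is evicted last.
            _order.Remove(node);
            _order.AddFirst(node);
            return node.Value.Value;
        }

        double[] block = load(); // e.g. a seek-and-read against that level's file
        var newNode = _order.AddFirst(new KeyValuePair<string, double[]>(key, block));
        _map[key] = newNode;

        if (_map.Count > _capacity)
        {
            // Evict the least recently used block to keep memory bounded.
            LinkedListNode<KeyValuePair<string, double[]>> last = _order.Last;
            _order.RemoveLast();
            _map.Remove(last.Value.Key);
        }
        return block;
    }
}
```

Evicting the least recently used block keeps the memory footprint bounded no matter how far you pan, while repeated pans over the same region stay fast.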

So, does this seem interesting? Are 1 billion data values more than you need in a chart? Let me know your thoughts.

This kind of approach has some other interesting aspects as well, even with less extreme amounts of data. We could use it to make a chart consume less memory on a mobile device with tighter memory constraints, or we could adapt it to read multi-resolution data over a network, to display lots of data in a chart without having to download it all over the wire up front. Let me know if any of that sounds appealing.

Also, I had some trouble locating a good, free time series data source with this kind of extreme data volume for testing, so eventually I just decided to generate a random stream of data. If anyone knows a good data source they'd like me to try, let me know.

-Graham