I am curious: using the sample demonstrating serialization of the grid, I can bump the result set up to 100k rows:
http://blogs.infragistics.com/forums/p/43476/239354.aspx#239354
If I then drop Amount of Sale onto the grid, that should create roughly one row for every item in the ItemsSource. When I do this, it hangs the grid for minutes.
I thought that with virtualization in the controls this would be fast and would only really load as I scroll. Am I missing something?
Hi, gfricke,
This performance is expected. It is not one column that is added, but possibly about 400,000 if you add the Amount of Sale hierarchy to columns ;)
The thing is that Amount of Sale is meant to be just a measure, so I have not added hierarchy metadata for it. As a result, when you drop its default hierarchy into the grid, the control tries to add a column for each unique value under every existing column. There are a lot of unique values, and that number is multiplied by the number of already expanded columns.
Notice that this "browser hanging" does not occur with any of the other hierarchies.
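To give a back-of-the-envelope feel for the multiplication (the figures in this sketch are hypothetical placeholders, not measurements taken from the sample data):

```csharp
using System;

// Illustrative sketch of the column explosion described above.
// All numbers below are made up for the example.
class ColumnExplosionSketch
{
    static void Main()
    {
        int uniqueMeasureValues = 20000; // hypothetical distinct "Amount of Sale" values
        int expandedColumns = 20;        // hypothetical column tuples already expanded in the grid

        // A normal hierarchy adds roughly one column per member.
        int regularHierarchyColumns = expandedColumns + 12; // e.g. twelve months

        // Treating the raw measure values as a hierarchy instead crosses
        // every unique value with every expanded column.
        long measureAsColumns = (long)uniqueMeasureValues * expandedColumns;

        Console.WriteLine("Regular hierarchy:  ~" + regularHierarchyColumns + " columns");
        Console.WriteLine("Measure as columns: ~" + measureAsColumns.ToString("N0") + " columns");
        // 20,000 * 20 = 400,000 columns, the order of magnitude mentioned
        // above, and every one of them needs its cell values computed
        // before anything is rendered.
    }
}
```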
All the best,
Atanas
P.S. Virtualization does not help in this case, because all of the values have to be recalculated based on the new query. If you were displaying non-dynamic data there would be a significant performance boost, but not with cube data.
The pivot grid does support virtualization, correct? When does it take advantage of it, after the binding is complete? Essentially, I want to ensure that if I have to wire up 100k rows to it and display them, it can handle it.
The situation I am dealing with is 100k rows driven by dimensions, with a handful of measures. Will it still be able to take advantage of virtualization in that case?
E.g. imagine many hierarchies slicing/dicing the product data, but still only showing cost, amount, etc. as measure columns, with the rest being dimensions shown as rows...
Yes, the XamPivotGrid supports virtualization. It takes advantage of it immediately after the data is loaded from the data source and the slice computations are complete.
Working with 100k rows is not a problem for the control.
What is time-consuming in the scenario you originally mentioned is calculating the resultant value for every cell. The number of columns in that particular situation is closer to half a million than to 100k, which adds up to about 2-3 million cells. Those calculations occur before anything is rendered in the UI, and thus virtualization does not help.
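Here is a rough sketch of the distinction (illustrative only, not the XamPivotGrid's internals): every cell aggregate is computed when the slice changes, and only afterwards does virtualization limit how many of those already computed cells are actually realized in the UI.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only -- not the control's actual implementation.
class PivotSketch
{
    class Sale { public string Product; public string Region; public double Amount; }

    static void Main()
    {
        // Stand-in for the data source; 100k flat rows are not the problem.
        var sales = Enumerable.Range(0, 100000).Select(i => new Sale
        {
            Product = "Product " + (i % 500),
            Region = "Region " + (i % 10),
            Amount = i % 97
        }).ToList();

        // Step 1: compute EVERY cell up front. The cost depends on the number
        // of source rows and of distinct (row, column) pairs, regardless of
        // what is currently on screen. In the measure-as-columns case above
        // that means millions of cells, which is the part that takes minutes.
        var cells = new Dictionary<Tuple<string, string>, double>();
        foreach (var s in sales)
        {
            var key = Tuple.Create(s.Product, s.Region);
            double total;
            cells.TryGetValue(key, out total);
            cells[key] = total + s.Amount;
        }

        // Step 2: realize only the visible window. Virtualization keeps this
        // pass cheap no matter how many cells exist, which is why the
        // dimensions-as-rows scenario with a handful of measures stays fast.
        var rowHeaders = sales.Select(s => s.Product).Distinct().ToList();
        var colHeaders = sales.Select(s => s.Region).Distinct().ToList();
        foreach (var r in rowHeaders.Take(20))      // e.g. 20 visible rows
            foreach (var c in colHeaders.Take(5))   // e.g. 5 visible columns
            {
                double value;
                cells.TryGetValue(Tuple.Create(r, c), out value);
                // ...create UI elements for just these cells...
            }
    }
}
```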
I hope this answers your question.