I'm encountering a performance issue when I bind a dataset to a datagrid at runtime. Regardless of the speed of the initial data population, when I issue a "refresh" request, the dataset with the exact same query/results takes twice as long to bind to the grid. I've read the document on this site that discusses many of the WinGrid performance issues and have tried a number of the suggestions with no success.
The logic follows a certain flow:
- separate routine populates a dataset (mygrdxData)
- if there are results, SetDataBinding is executed to bind that data to the datagrid (see snippet below)
In the first execution the binding process takes about 5 seconds (1000 records with about 60 columns). The second execution takes about 11 seconds (same record count, column count, and actual data).
Why does the time spent in SetDataBinding double on requests subsequent to the first? What can I do to eliminate this and/or reset the grid back to its "original state"?
Dim iTick As Integer = GetTickCount
grdx.DataBindings.Clear() : MaxRows = 0
If gfIsArrayFull(mygrdxData) OrElse (Not IsNothing(mygrdxData) AndAlso mygrdxData.Tables.Count > 0 AndAlso mygrdxData.Tables(0).Columns.Count > 0) Then
    'grdx.DisplayLayout.MaxBandDepth = 1 : grdx.SyncWithCurrencyManager = False
    grdx.SetDataBinding(mygrdxData, mygrdxData.Tables(0).TableName, False)
    MaxRows = mygrdxData.Tables(0).Rows.Count
End If
Debug.WriteLine("SetDataBinding(ms): " & CStr(GetTickCount - iTick))
Hi,
What exactly do you mean when you say "refresh" request? I assume you mean that the same code you have here is being called again, so you are re-setting the grid's DataSource and DataMember to the same values?
Out of curiosity - why do that? It's usually best to avoid doing that if you can, since you will lose all state information for any rows in the grid.
Anyway... my guess is that the extra time is because the first time you bind the grid, it simply has to create the bands, columns, and rows. The second time you bind the grid, it not only has to create new objects, it has to dispose of all of the existing objects.
Doubling the time seems a bit excessive for that process, so it might not be as efficient as it could be, but that's the only thing I can think of that would cause an increase in time.
To verify that, you could break up the process by setting the grid's DataSource to null first and seeing if that takes 6 seconds and therefore accounts for the extra time.
Either way, if you would like to post a small sample project here which demonstrates the behavior you are getting, I'd be happy to check it out and we can see if there is anything we (or you) can do to improve the performance.
To answer the "refresh" question - yes, the grid is attached to a specialized "filter" control which allows our user(s) to change the conditions of the query before displaying results in the datagrid. The "Refresh" option is provided so that the user can issue the same query command to "see" any changes that may have been made by other users.
The query returns an ADO recordset, which is then bound to the datagrid. In this particular example, I simply issued the "refresh" command to rule out the possibility that the bind time differs because of different records (or different data within the records) in the ADO recordset, thinking the bind times would be identical, or at least close, since it is the same number of records with the same data.
I modified the code as you suggested (set grid datasource to nothing prior to the binding) - didn't make a difference to the binding time in the second call (see below).
What did you mean by "it not only has to create new objects, it has to dispose of all of the existing objects"? What objects is it disposing? Is there some way to issue these same/similar commands prior to the SetDataBinding so that the grid doesn't have to interpret this action on the fly, and/or to record the grid state prior to the first binding and then (re)set it back to that state prior to subsequent calls?
As to posting a small project, I can't really do this since it involves our filter control, and that turns your request into a rather large exercise. I did create a project that simulates similar characteristics (i.e., bind 1000 records with 200 columns), BUT it does not show the same behaviour: the original load time is quick and subsequent calls are even quicker. Does this have something to do with the dataset being bound, i.e., one from an ADO call to SQL versus one that is constructed in memory?
Thanks
Code Snippet:
Dim iTick As Integer = GetTickCount
If gfIsArrayFull(mygrdxData) OrElse (Not IsNothing(mygrdxData) AndAlso mygrdxData.Tables.Count > 0 AndAlso mygrdxData.Tables(0).Columns.Count > 0) Then
    grdx.DataSource = Nothing
    Debug.WriteLine("ClearBinding(ms): " & CStr(GetTickCount - iTick))
    iTick = GetTickCount
    grdx.SetDataBinding(mygrdxData, mygrdxData.Tables(0).TableName, False)
    MaxRows = mygrdxData.Tables(0).Rows.Count
Else
    grdx.DataSource = Nothing
    MaxRows = 0
End If
Debug.WriteLine("SetDataBinding(ms): " & CStr(GetTickCount - iTick))
First Call:
ClearBinding(ms): 0
SetDataBinding(ms): 5990
Second Call:
SetDataBinding(ms): 11762
James Francolini said:Having said that, is there another event that fires once the rows in a grid have been fully populated (better than the cheat I did in the InitializeRow event)?
No. Data binding is an ongoing process; there is really no point at which it is ever "finished" or "all rows are loaded", because the grid stays in sync with changes to the data source continuously. So if your solution works, you should probably stick with it.
Thanks for the suggestions... you mentioned something that made me go look at the code.
The SetDataBinding was triggering a "special" InitializeRow event for the grid (created under our specialized filter control). The logic under this event was really only required once the full population of the grid was completed, not for each individual row.
I put a condition in this event, <If e.Row.Index = (sender.Rows.Count - 1) Then>, to capture the inclusion of the last row, and the binding time went down dramatically: from 11 seconds to 0.25 seconds.
This proves that you need to be very careful about the placement of code in certain events. Having said that, is there another event that fires once the rows in a grid have been fully populated (better than the cheat I did in the InitializeRow event)?
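For anyone reading along, a rough sketch of the InitializeRow guard described above might look like this (assumes an Infragistics UltraGrid named grdx; RunPostLoadLogic is a hypothetical placeholder for whatever once-per-load work your handler performs):

```vb
Private Sub grdx_InitializeRow(ByVal sender As Object, _
        ByVal e As Infragistics.Win.UltraWinGrid.InitializeRowEventArgs) _
        Handles grdx.InitializeRow
    Dim grid As Infragistics.Win.UltraWinGrid.UltraGrid = _
        DirectCast(sender, Infragistics.Win.UltraWinGrid.UltraGrid)
    ' Only run the expensive post-load logic once, when the last row is
    ' initialized, instead of on every one of the 1000 rows.
    If e.Row.Index = grid.Rows.Count - 1 Then
        RunPostLoadLogic(grid) ' hypothetical helper that needs the fully populated grid
    End If
End Sub
```

Note that InitializeRow also fires when individual rows change, so the guard assumes the handler only matters during the initial population.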
James Francolini said:To answer the "refresh" question - yes, the grid is attached to a specialized "filter" control which allows our user(s) to change the conditions of the query before displaying results in the datagrid. The "Refresh" option is provided so that the user can issue the same query command to "see" any changes that may have been made by other users.
Is it possible to set it up so that your filtering filters the existing data source rather than creating a new one? If you simply remove/add rows in your existing data source, the grid will be notified of the changes and respond automatically without having to re-bind.
I realize, of course, that this is not always possible.
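As a sketch of that approach (assuming the bound table has the same schema as the re-queried results; RunQuery is a hypothetical helper that re-executes the query into a new DataTable):

```vb
Dim boundTable As DataTable = mygrdxData.Tables(0)
Dim freshTable As DataTable = RunQuery()

' Suspend change notifications while the rows are swapped out; EndLoadData
' raises a single reset notification, so the grid refreshes once rather
' than rebuilding its bands and columns from scratch via a full re-bind.
boundTable.BeginLoadData()
boundTable.Rows.Clear()
For Each row As DataRow In freshTable.Rows
    boundTable.ImportRow(row)
Next
boundTable.EndLoadData()
```

Clearing the rows still discards per-row state, but the grid's band/column structure survives, which is where much of the re-bind cost appears to go.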
James Francolini said:What did you mean by "it not only has to create new objects, it has to dispose of all of the existing objects"? What objects is it disposing? Is there some way to issue these same/similar commands prior to the SetDataBinding so that the grid doesn't have to interpret this action on the fly, and/or to record the grid state prior to the first binding and then (re)set it back to that state prior to subsequent calls?
The grid has to dispose of any objects it created, such as UltraGridBand, UltraGridColumn, UltraGridRow, and UltraGridCell objects, just to name the most obvious ones.
I cannot be sure that this is what is causing the delay; it's just a guess, but it seems to make sense.
James Francolini said:As to posting a small project, can't really do this since it involves our filter control and this turns your request into a rather large exercise. I did create a project that simulates similar characteristics (ie, bind 1000 records with 200 columns) BUT it does not show the same behaviour - original load time is quick and subsequent calls are even quicker. Does this have something to do with the dataset being bound, i.e., one from an ADO call to SQL versus one that is contructed in memory?
If this is indeed a problem with the grid, then why would it matter whether you are using your specialized filtering? Are you sure you have eliminated the retrieval of the data as part of the time to rebind the grid?
If what you are saying is true, then as far as the grid is concerned, all you are doing is binding the grid to a data source with X records and then re-binding it to a new data source with the same band and column structure that has Y records.
If doing that doesn't reproduce the issue, then it would seem to indicate that something about your filtering or the data source you are using is causing the issue.
The only way to really be sure what's causing the delay is to duplicate the problem in some kind of sample and run it through a performance profiler. Otherwise, all I can do is guess. So if you cannot create a sample project that I can run and see the problem, you might want to look into getting a profiler application like Ants.