We've encountered a problem with WARP and WebTab that seems to match what is documented in this post:
http://news.labs.infragistics.com/forums/t/2440.aspx
and a couple more like the post above pertaining to WARP. We think that the async postback functionality of WebTab and WARP makes them susceptible to a new error that manifests after installing .NET Framework 2.0 SP1. The errors don't manifest under .NET Framework 2.0 without the service pack.
We have a support request in, if we could just get IG to take it seriously. It seems they tested under .NET Framework 2.0 without SP1, didn't see the error, and don't think there's a problem. We've followed up with them to let them know that the problem is specific to SP1. Hopefully, we'll see some response soon.
To validate the SP1 issue, we created a test project under VS2005 and VS2008 using CLR 2.0 versions of IG ASP.NET 7.1, then ran it against machines configured with .NET 2.0 and .NET 2.0 SP1. As I mentioned before, the problem manifests under SP1, but not under the non-SP1 2.0 install.
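Part of that validation was confirming that each test machine really was at the framework level we thought it was. For what it's worth, here's a minimal sketch of how that check can be scripted; it isn't part of our actual harness, and it assumes the registry location Microsoft documents for .NET 2.0 setup state (the SP value under the NDP\v2.0.50727 key):

    # Minimal sketch: report the .NET Framework 2.0 servicing level on a test box
    # by reading the framework setup key. Run with Python on the Windows machine.
    import winreg

    NDP_V20_KEY = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727"

    def dotnet20_service_pack():
        """Return the .NET 2.0 service pack level (0 = RTM), or None if 2.0 isn't installed."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NDP_V20_KEY) as key:
                installed, _ = winreg.QueryValueEx(key, "Install")
                if installed != 1:
                    return None
                try:
                    sp, _ = winreg.QueryValueEx(key, "SP")
                except OSError:
                    sp = 0  # assume RTM if the SP value is absent
                return sp
        except OSError:
            return None

    if __name__ == "__main__":
        sp = dotnet20_service_pack()
        if sp is None:
            print(".NET Framework 2.0 does not appear to be installed")
        else:
            print(".NET Framework 2.0 service pack level: %d" % sp)

Anything equivalent (checking Add/Remove Programs, for instance) works just as well; the point is simply to pin down the servicing level on each machine before comparing results.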
-Jason
Jason,
I assume that the support request that you're referring to is incident number WBT3116? We had originally tested this using .NET Framework 2.0 RTM. Now that we know that you're using .NET Framework 2.0 SP1, and that you also aren't encountering the problem using the RTM of the framework, we're testing your issue in .NET Framework 2.0 SP1 as well. The Developer Support Engineer on this incident will follow up with you once his test is complete.
Please rest assured that we are taking your support request seriously. Developer Support occasionally doesn't have enough information to diagnose a reported problem immediately, so we try to share all of the apparently relevant information about what we've seen so far, including a sample project if we're able to make one. It's important for us to do this so that you can give us feedback on what is being done differently, and so that we can recreate the issue you're encountering, or, in some cases, confirm that what we did differently works as a solution for your situation. Just because we can't reproduce a problem doesn't mean we believe there's no problem; it means we don't yet know enough about what's happening to determine why.
Vince
Your assumption is correct. Though we were less than enthusiastic about the initial response we got from support, hindsight suggests that we didn't know as much about this when we received that first response from your support team as we do today. If there was any lack of clarity or detail on our part, we take responsibility for that.
I think the sense of dismissal came from a statement in the second follow-up to the additional information we sent along, in which support stated, "as such there is no incompatibility between WebTab control and MS validator." That kind of statement, when we're watching the failures manifest before our eyes, does not sit well, as I'm sure you understand.
So that leaves the customer to troubleshoot the possible reasons for the problem and to begin IG's troubleshooting work of tracking it down, when, with a bit of trust on IG's part, the problem's existence could have, and should have, been treated as a presumptive fact.
We do development, too, and we ALWAYS take our customers' reports seriously and NEVER tell them that the problem isn't there. We just assume we have to find the conditions that caused it.
You're very welcome for whatever clarity my last response may have brought to this exchange, but it occurred to me while reading your reply that I'm breaking one of my own cardinal rules by not telling you what I DID like about the interaction with your support staff, not just the part that didn't work for us.
Your support tech took the extra step of providing a screen-capture video of his interaction with the sample project we sent along with our support request on this issue. Watching that video made it clear to us that some systemic variation was causing the differing results, and it was a very good point of departure for our internal validation process for reports of this type. It demonstrated that SOMEONE needed to do a variant or multi-variant analysis on this problem, and, since it apparently wasn't going to be IG, we set out to look for reasons why your results were different from ours. That effort led us to look for documentation of similar experiences and for recent changes in our systems that might need to be included in the variant analysis.
We could exchange many messages about where that variant or multi-variant analysis should have been done: here or at IG.
One last point. We've found that a good analysis tool is a system information report (SIR) for each production or development platform that is the source of a bug report. Windows XP Professional, for example, includes the mainstay Microsoft System Information tool (msinfo32) for generating these. We have developed internal tools that perform string searches and comparisons between the system reports our customers generate and submit with bug reports and those from the internal test platforms on which we validate reported bugs/problems. If we get different results than our customers report, our first stop is to run a comparison of the differences between the test platform and the customer SIR. That comparison, to a trained tech, can point to the most promising explanations for the differing results.
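To make the idea concrete, here's a hypothetical, stripped-down sketch of the kind of comparison such a tool performs: a line-level diff of two exported System Information reports, without the normalization or filtering of volatile fields (uptime, free memory, and so on) that real tooling would add:

    # Hypothetical sketch: diff two exported System Information reports and print
    # only the lines that differ, as a starting point for spotting configuration deltas.
    import difflib
    import sys

    def load_report(path):
        # Exported reports are often Unicode; sniff the BOM to pick a decoding,
        # then keep the non-blank lines for a coarse line-level comparison.
        with open(path, "rb") as f:
            raw = f.read()
        encoding = "utf-16" if raw.startswith((b"\xff\xfe", b"\xfe\xff")) else "utf-8"
        text = raw.decode(encoding, errors="replace")
        return [line.rstrip() for line in text.splitlines() if line.strip()]

    def compare_reports(test_platform_path, customer_path):
        ours = load_report(test_platform_path)
        theirs = load_report(customer_path)
        diff = difflib.unified_diff(ours, theirs,
                                    fromfile=test_platform_path,
                                    tofile=customer_path,
                                    lineterm="")
        for line in diff:
            # Only the changed lines matter when hunting for configuration differences.
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
                print(line)

    if __name__ == "__main__":
        compare_reports(sys.argv[1], sys.argv[2])

Run against the customer's report and the matching report from the test platform, the surviving lines are the candidate variables (service pack levels, hotfixes, driver and assembly versions) that a trained tech would weigh first.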
Thank you for your follow-up.
It's very important to me that you've mentioned what you found inappropriate about the response my staff provided. Looking at the response you were sent in more detail, it is clearer now how it could seem as though we were saying, "there is no problem because we don't see it." The message should instead have said, "we don't have enough information to identify the problem."
Just as you've described, when we can't reproduce a problem our customer reports, our assumption is always that we don't have enough information to see it, not that the problem isn't there. It is and always should be a presumptive fact that a customer reporting a problem to us is encountering that problem, whether we can see how it's happening or not. I'm taking steps to ensure that my staff does a better job of communicating this point, as well as communicating what the next step is. At worst, if we have no specific troubleshooting suggestions or questions, the "next step" would be that we need additional information.
Regarding the technical issue itself, I see that my staff was able to reproduce the behavior when we tested with .NET Framework 2.0 SP1. We've reported this to our developers for further research, which in this case I expect will lead to a code change that you'll receive as part of an upcoming hot fix.
Again, thanks for the feedback, I apologize for the miscommunications from my staff, and I hope that this issue is now on the right track to being resolved.