Q: Isn’t there some issue with profiler where it does not recognize DBName but only the DBID?
Well, this is a good one – and a frustrating one as well. It's true that Profiler doesn't generate a value for every data column for every event type. So if you set up a filter on a data column, it will only filter the rows that actually have a value for that column. If you only want to see events with a very specific value, it can take quite a bit of time to remove all of the uninteresting events and narrow things down to just what you need. In the SQL Server Books Online there is a topic for each event category – for example, the Stored Procedures Event Category topic lists all of the data columns that are produced for each event. That's a bit tedious to go through, though, and there's a nice full HTML version available on Gert Draper's website: www.SQLDev.net. Check out this link specifically: http://sqldev.net/misc/SQLTraceEventMatrix.htm
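If it helps, here's a rough sketch of the workaround – filter on DatabaseID instead of DatabaseName, and translate IDs back to names when you read the trace output (the database name and trace file path below are placeholders):

```sql
-- Look up the ID for the database you care about
SELECT DB_ID('MyDatabase') AS dbid;

-- When reading a saved trace file, map DatabaseID back to a name
-- and filter down to just the database you want
SELECT DB_NAME(DatabaseID) AS DatabaseName, TextData, Duration
FROM ::fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)
WHERE DatabaseID = DB_ID('MyDatabase');
```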
Q: Is it true that ITW is beneficial because it can predict how the optimizer will benefit most from the indexes it recommends? Sometimes I find that the ITW recommends statistics that hinder the query performance. Do you feel comfortable using it for the most part?
Ok, so there are really two questions here – first “how does ITW work?” and second “is it always a good thing?”
As for how it works – yes, you're correct… ITW hypothesizes about indexes by creating statistics on your data, looking at the queries in the workload, and then, based on how the optimizer would use those hypothetical indexes, telling you which ones would best serve your workload!
As for whether or not it's always a good thing… the first thing I'll jokingly say is – is there ever an ALWAYS-good good thing? :) Joking aside, there is some truth to this (although it's rare). I do feel comfortable using ITW. Because the statistics it creates are based on real data – and aren't updated if they're not used (there's a good whitepaper that explains statistics here) – statistics don't usually hinder query (or server) performance. However, having an extremely large number of statistics and indexes can increase optimization time (the optimizer has more choices to analyze), and that may in fact be what you're seeing.
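If you want to see what ITW and AUTO CREATE STATISTICS have left behind, a sketch like this against sysindexes can help – hypothetical indexes report IsHypothetical = 1 via INDEXPROPERTY, and auto-created single-column statistics have names starting with _WA_Sys_:

```sql
-- Hypothetical indexes left over from an ITW run
SELECT OBJECT_NAME(id) AS tablename, name AS indexname
FROM sysindexes
WHERE INDEXPROPERTY(id, name, 'IsHypothetical') = 1;

-- Auto-created single-column statistics
SELECT OBJECT_NAME(id) AS tablename, name AS statsname
FROM sysindexes
WHERE name LIKE '_WA_Sys_%';
```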
Q: When statistics exist on some of my client tables, SQL Server sometimes chooses to use an index that is less beneficial (even after we update statistics with a full scan). In some cases, we've had to force the index using optimizer hints. As a result, we disable "AUTO CREATE" and only create indexes when performance issues arise and they're needed. Granted, this isn't the best method, but until the tables become really large we find that some queries work fine.
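For reference, here's roughly what those two steps look like – forcing an index with a table hint and turning off AUTO CREATE STATISTICS (all of the object names here are placeholders):

```sql
-- Force the optimizer to use a specific index via a table hint
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_OrderDate))
WHERE OrderDate >= '20030101';

-- Disable automatic creation of single-column statistics
EXEC sp_dboption 'MyDatabase', 'auto create statistics', 'false';
```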
Ok, well, there's a lot to this one. First, I'd say that there's a difference between just having statistics and having the right indexes. In some cases, having ONLY statistics doesn't give SQL Server enough information, and it makes incorrect decisions. This generally happens when SQL Server only has statistics for individual columns and doesn't have better information for a combination of columns (wow, this is hard to describe in a short Q&A). But I'll sum it up by saying that statistics aren't always perfect – some assumptions may need to be made, and sometimes those assumptions don't work out because the data isn't evenly distributed… So, if the indexes that exist are not "perfect" for the query, SQL Server ends up not doing as good a job as it might with the right indexes… If it helps, I have a class where I get into more detail on statistics (a whole module on just statistics :)) that explains a lot more… It's hard to sum up in a sentence or two, BUT I would guess that a bit more tuning (i.e., more real indexes, not just statistics) might help solve the problem… Really, it's hard to say in a quick chat, but that really shouldn't happen the majority of the time.
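A sketch of what "better information for a combination of columns" might look like – AUTO CREATE STATISTICS only builds single-column statistics, but you can create a multi-column statistic (or, better, a composite index) yourself so the optimizer has density information for the combination (the table and column names below are placeholders):

```sql
-- Multi-column statistics give the optimizer density info
-- on the column combination, not just each column alone
CREATE STATISTICS st_Orders_Cust_Date
ON dbo.Orders (CustomerID, OrderDate)
WITH FULLSCAN;

-- Or create the supporting composite index, which can also be used for seeks
CREATE INDEX IX_Orders_Cust_Date ON dbo.Orders (CustomerID, OrderDate);
```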