Index Management/Maintenance Questions:


Q: Is there a tool or method for monitoring a database for a while to identify indexes that are rarely or never used – and that should be considered for removal?


I think I mentioned this one late in the presentation, but it certainly warrants an answer. Unfortunately, the answer is both yes and no. To start with the “no”: there’s nothing inside of SQL Server [today] that keeps track of index usage. One option is to create a trace and run it through the Index Tuning Wizard (ITW). The ITW has an option on the first dialog that asks if you want to “Keep all existing indexes.” If you deselect this, it will recommend dropping indexes that are not used by that workload. Be careful! You might drop an index that someone else needs. In SQL Server 2005, the ITW is being replaced by the DTA (the Database Tuning Advisor), which has a “drop-only” mode that allows DTA to check only for unused indexes.


 


But – this still doesn’t really give you a list of what gets used and what doesn’t. I can think of a way to get closer, though it will be expensive (in terms of Profiler overhead and your time) to implement, and I’m not sure it’s worth it. Here’s the idea: capture showplan information into a Profiler trace, load it into a SQL table, and then search for your indexes by name (make sure to use the fully qualified name, such as [Credit].[dbo].[member2].[member2Cl], as index names only need to be unique per table). You could even do an outer join with sysindexes to get the complete list of indexes relative to the workload. It won’t be pretty, but it can certainly be done – and it still covers ONLY the duration of the workload. Overall, I think this is too expensive to do. Good news is coming, though: in SQL Server 2005 there will be a virtual system table that keeps track of index usage since the last server restart or since you reset the values.
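To illustrate the idea, here’s a rough sketch of the final join. It assumes you’ve already saved the showplan trace into a table – the table name (dbo.WorkloadTrace) and text column (TextData) are hypothetical placeholders for whatever your trace table actually contains:

```sql
-- Sketch only: list indexes that never appear in the captured plans.
-- dbo.WorkloadTrace / TextData are assumed names - substitute your own.
SELECT  o.name AS TableName,
        i.name AS IndexName
FROM    sysindexes AS i
JOIN    sysobjects AS o
        ON o.id = i.id
WHERE   OBJECTPROPERTY(o.id, 'IsUserTable') = 1
  AND   i.indid BETWEEN 1 AND 250          -- real indexes only (skip heaps/text)
  AND   NOT EXISTS
        (SELECT *
         FROM   dbo.WorkloadTrace AS t
         WHERE  t.TextData LIKE '%' + i.name + '%')
ORDER BY TableName, IndexName
```

Remember the caveat from above: a LIKE match on the index name alone can collide across tables, which is why the fully qualified name is safer, and a “missing” index here only means unused during that workload.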


 


Q: What is the best way to analyze both the space used and fragmentation of each table? More specifically, is there one procedure that will check all tables in a database, output the results and show me which tables/indexes are foobar?


I don’t think any particular command actually outputs the level of “foobar-ness,” but this one will get you very close. Using DBCC SHOWCONTIG WITH TABLERESULTS returns almost everything you’re looking for. Use the Pages column to calculate space ((convert(decimal(10,3), Pages) * 8) / 1024 = MB, since each page is 8 KB), then AveragePageDensity (for internal fragmentation), and then ScanDensity, LogicalFragmentation and ExtentFragmentation (for the different forms of external fragmentation). One of the best parts of getting a tabular result set back is that you can put the data into a table, review or change it, and walk through it programmatically – even running specific operations based on what you learn. In fact, this is what my sp_RebuildIndexes script does. And check out DBCC ShowContig with TableResults.sql (another demo script) for the definition of the table. Both of these scripts can be found with the demo scripts: MSDNWebcast-Tripp-20040611-SCRIPTS.zip (73.19 KB). Additionally, you can find the scripts on www.SQLskills.com under Event resources.
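As a sketch of the capture-into-a-table approach: the column list below is from memory of the SQL Server 2000 TABLERESULTS output – verify it against the demo script mentioned above before relying on it:

```sql
-- Sketch: capture DBCC SHOWCONTIG output into a table, then query it.
-- Column names/types should be checked against the demo script's definition.
CREATE TABLE #Frag
(
    ObjectName           sysname,
    ObjectId             int,
    IndexName            sysname,
    IndexId              int,
    [Level]              int,
    Pages                int,
    [Rows]               int,
    MinimumRecordSize    int,
    MaximumRecordSize    int,
    AverageRecordSize    int,
    ForwardedRecords     int,
    Extents              int,
    ExtentSwitches       int,
    AverageFreeBytes     decimal,
    AveragePageDensity   decimal,
    ScanDensity          decimal,
    BestCount            int,
    ActualCount          int,
    LogicalFragmentation decimal,
    ExtentFragmentation  decimal
)

INSERT #Frag
    EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

-- Size in MB plus the key fragmentation measures, worst first:
SELECT  ObjectName,
        IndexName,
        (convert(decimal(10,3), Pages) * 8) / 1024 AS SizeMB,
        AveragePageDensity,        -- internal fragmentation
        ScanDensity,               -- external fragmentation
        LogicalFragmentation
FROM    #Frag
ORDER BY LogicalFragmentation DESC
```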


 


Q: Does SHOWCONTIG keep a table lock? Also, I was told not to use DBCC SHOWCONTIG on a production server because it can adversely affect performance… Is that true?


First, yes, there is a shared table-level lock acquired for the length of the scan. If you run DBCC SHOWCONTIG with the default options, that scan will not only check for external fragmentation but also calculate internal fragmentation. To reduce the time for which these locks are held you can use the WITH FAST option on DBCC SHOWCONTIG. This will ONLY do the scan (returning fewer fragmentation details), but it’s good if all you’re looking for is Scan Density! So – this can create blocking, and you should consider running it off hours OR consider running it on another server where you restore a backup of your production database. All fragmentation that exists when the database is backed up will be restored, since the restore does not physically change any data pages.
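A quick sketch of the FAST option (the table and index names here are examples):

```sql
-- WITH FAST scans only the leaf-level page chain, so the shared
-- table lock is held for less time (at the cost of less detail).
DBCC SHOWCONTIG ('dbo.member2') WITH FAST                -- whole table
DBCC SHOWCONTIG ('dbo.member2', 'member2Cl') WITH FAST   -- a single index
```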


 


Q: Could you tell us how FILLFACTOR affects performance?


The short answer is that it helps IMPROVE performance by leaving free space on the pages when the index is built, rebuilt or defragmented. When building or rebuilding the index you can specify (using FILLFACTOR) the percent fullness for each page. By leaving free space you can minimize page splits, reduce fragmentation, and therefore defragment less frequently. So – set the fillfactor based on the frequency with which you can rebuild. Combining that rebuild frequency with the rate at which your table becomes fragmented between rebuilds, you can start to find the optimal setting: the more fragmented the index becomes, the lower you should set the fillfactor (on the next rebuild); and if it doesn’t become overly fragmented (and there’s still free space by the time of the next rebuild), then you might want to increase the fillfactor. Over time you can converge on the optimal setting, but it does take a bit of work! Start with the “monster” tables first – the ones that give you the most problems and the ones that are the largest. With a bit of time you’ll have a more consistently balanced system.
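Two sketches of setting a fillfactor on a rebuild, using SQL Server 2000 syntax (table, index and column names are examples):

```sql
-- DBCC DBREINDEX: the third argument is the fillfactor;
-- 0 means "keep the index's original fillfactor setting".
DBCC DBREINDEX ('dbo.member2', 'member2Cl', 90)

-- Or specify it when creating (or re-creating) an index:
CREATE INDEX member2NC1
    ON dbo.member2 (lastname)
    WITH FILLFACTOR = 90
```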


 


Q: On a very large table (32 GB, 20 mil rows) with a clustered index and 10 non-clustered indexes – where would you start? If the length of time it takes to execute is long and you can only reindex or defrag a few indexes per day, which ones would you do and in what order?


Well, this is a great question! But – I’d like to change the focus a bit: if you focus on minimizing fragmentation, that will in turn keep things more optimal. Minimizing fragmentation in the clustered index starts with creating the recommended type of clustering key (unique, narrow, static and ever-increasing). If you meet all of these, the base table should only become fragmented due to updates to varchar columns. If you have lots of those, setting a fillfactor can help. The fillfactor defines how full the pages should be when the index is rebuilt or defragmented. If your index is becoming very fragmented between rebuilds, you should consider increasing the rebuild frequency or lowering the fillfactor. So – this is really what you should look into: verifying that you’re setting the fillfactor appropriately between rebuilds/reindexes and then adjusting that schedule appropriately!
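A sketch of a clustering key meeting those four criteria (table and column names are examples):

```sql
-- An int identity is narrow (4 bytes), static, ever-increasing,
-- and made unique by the PRIMARY KEY constraint.
CREATE TABLE dbo.member2
(
    member_no  int IDENTITY(1,1) NOT NULL,
    lastname   varchar(30)       NOT NULL,
    firstname  varchar(30)       NOT NULL,
    CONSTRAINT member2PK PRIMARY KEY CLUSTERED (member_no)
)
```

With this key, new rows always land at the logical end of the clustered index, so inserts don’t cause page splits in the middle of the table.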


 


But – when you still have a lot of fragmentation and still need frequent rebuilds, focus on the most important tables first, then the most important indexes (i.e., start with the clustered index). From there it’s hard to say, but the wider indexes are probably getting used more, so I’d look at those next.


 


And – one other tip: if some of you are getting a lot of fragmentation due to updates, think about fixed-width columns (which is rarely an option), default values (a GREAT choice if the rows come in with no value and get updated later), and rebuilds – either more frequently or with a more appropriate fillfactor.
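A sketch of the default-value trick (names are examples; it assumes ANSI_PADDING was ON when the table was created, so the trailing spaces in the default are actually stored):

```sql
-- If notes is typically updated to ~200 characters later, inserting a
-- wide default up front reserves the space at insert time, so the later
-- UPDATE expands the row less and causes fewer page splits.
CREATE TABLE dbo.orders2
(
    order_no  int IDENTITY NOT NULL,
    notes     varchar(200) NOT NULL
              DEFAULT replicate(' ', 200)   -- placeholder, overwritten later
)
```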