Q: How do I interpret DBCC SHOWCONTIG WITH ALL_INDEXES output for index id = 255 on a table with multiple text columns? I’m looking at a copy of last night’s production database after running Maintenance Plan Optimizations. Scan Density on the table and all indexes is 99-100%. Logical Fragmentation is 0% for everything except the text column index, where it’s over 99% (Extent Fragmentation is 83%). Do these numbers have any negative impact on performance? Are they worrisome?


OK, so the first thing to respond to: the “Logical Fragmentation is 0% for everything except the text column index” result is a bug, and a recently filed one. It occurs when you use the ALL_INDEXES option with DBCC SHOWCONTIG, so if you can walk the indexes individually instead, you can probably avoid it (and you could always tweak my version of sp_RebuildIndexes to do this).
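To walk the indexes individually, a cursor over sysindexes works. Here’s a rough sketch (dbo.TestTable is just a placeholder table name; the filter skips the heap at indid 0, the text index at indid 255, and auto-created statistics):

```sql
-- Sketch: run DBCC SHOWCONTIG per index rather than WITH ALL_INDEXES.
-- dbo.TestTable is a placeholder table name.
DECLARE @id int, @indid smallint
SELECT @id = OBJECT_ID('dbo.TestTable')

DECLARE IndexList CURSOR FOR
    SELECT indid
    FROM sysindexes
    WHERE id = @id
      AND indid BETWEEN 1 AND 254                        -- skip the heap (0) and the text index (255)
      AND INDEXPROPERTY(id, name, 'IsStatistics') = 0    -- skip auto-created statistics

OPEN IndexList
FETCH NEXT FROM IndexList INTO @indid
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC SHOWCONTIG (@id, @indid)
    FETCH NEXT FROM IndexList INTO @indid
END
CLOSE IndexList
DEALLOCATE IndexList
```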

Now, related to text indexes (and this is as per the Index Defrag Whitepaper): Logical Scan Fragmentation is not relevant on text indexes. However, Extent Scan Fragmentation is, and yes, this is something that could be worrisome IF you return the text/image data to your clients in large requests (meaning LOBs measured in MBs, or requests that return a lot of LOBs). If you think you’re doing large scans (or if the text data is very large and generally returned to the client), then you might want to compact this part of the database and even consider moving the text/image data into its own filegroup (but remember that singleton selects won’t really benefit from this).

So, to start – it’s important to just mention this option. When you create a table, you can state where the data is located with the ON clause; you can also state where to place the text/image data with the TEXTIMAGE_ON clause. A sample CREATE TABLE that places data on one filegroup and text/image data on another is below:

CREATE TABLE dbo.TestTable
(
    col1    int         identity(100, 10),
    col2    datetime    default getdate(),
    col3    text        null,
    col4    char(30)    default suser_name()
)
ON [FG1]
TEXTIMAGE_ON [FG2]

(Note that the table needs at least one text, ntext, or image column – here, col3 – or SQL Server won’t accept the TEXTIMAGE_ON clause.)
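For the statement above to work, the FG1 and FG2 filegroups have to exist already. Something along these lines creates them; TestDB and the file paths are just made-up placeholders:

```sql
-- Sketch: create the filegroups referenced by ON/TEXTIMAGE_ON.
-- TestDB and the file paths below are placeholders.
ALTER DATABASE TestDB ADD FILEGROUP [FG1]
ALTER DATABASE TestDB ADD FILEGROUP [FG2]

ALTER DATABASE TestDB
    ADD FILE (NAME = TestDBData1, FILENAME = 'D:\Data\TestDBData1.ndf', SIZE = 100MB)
    TO FILEGROUP [FG1]

ALTER DATABASE TestDB
    ADD FILE (NAME = TestDBText1, FILENAME = 'E:\Data\TestDBText1.ndf', SIZE = 500MB)
    TO FILEGROUP [FG2]
```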

Since the table is already created, and very fragmented, the only option you have now is to “move” that data to another filegroup. To do this you have a couple of options, depending mostly on table size. The easiest is to use SQL Enterprise Manager: first create a new file and filegroup, and then set the table’s Text Filegroup property (under Properties in Design Table) to the new filegroup. The downside is that the ENTIRE table (and all related objects, constraints, and indexes) will be rebuilt using this method. The other option, which is a bit harder to implement, is to BCP the data out and then BCP it back in; however, you cannot change the filegroup using ALTER TABLE, so this only works if you load the data back into the same filegroup in which it already resides. Depending on how many gaps there are within the file, you may not end up with very contiguous allocations at the end of this either.

So, I think it mostly depends on how much work you’re willing to do and how much of a gain you think it’s going to have. If MOST queries return only a singleton text/image value AND the performance of those operations is adequate, then I might not worry about it so much (even though, yes, it is fragmented). As per one of the dev team folks, one thing to do might be to time your SELECTs and determine whether they are slower than you would like; if yes, then DELETE/INSERT the text/image value to have it re-inserted. BUT, if you have time to shift things around and don’t mind some administrative impact, there are a couple of options to review if you are getting poor performance from text/image data. One option is to separate it onto its own filegroup (fyi, there are other management benefits to doing this, especially in the area of backup and restore when this is a large portion of your database), and you should consider taking the time to do this off hours. Additionally, it will probably be easiest using the UI. You should really follow these steps:

  1. Perform a full backup of your database
  2. Change the recovery model to simple or bulk_logged (unless it’s already simple or bulk_logged) and then restrict user access to dbo use only (just in case you need to revert back to the older version). Choose bulk_logged if this operation is going to take a significant amount of time AND you want to do periodic log backups while it’s running. If you just plan to do a full backup upon successful completion of the operation, then you can choose the simple recovery model.
  3. Use the UI to create a new file and filegroup (although this syntax is pretty easy)
  4. Use the UI to change the table’s properties to point to your new filegroup (probably should do some testing with a development version before you do this on production – especially with the same sizing, etc. so you can see how much log space and time this is likely to take, etc.)
  5. Change the recovery model back to what it was (likely full?)
  6. If you switched to simple, perform a full backup of your database. If you switched to bulk_logged then all you need is a transaction log backup.
  7. Allow users back in.
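In T-SQL, the non-UI steps above look roughly like this (TestDB and the backup paths are placeholders; steps 3 and 4 are done through the UI as described):

```sql
-- Sketch of steps 1, 2, 5, 6 and 7 (TestDB and paths are placeholders).
BACKUP DATABASE TestDB TO DISK = 'F:\Backup\TestDB_Full.bak'          -- 1. full backup
ALTER DATABASE TestDB SET RECOVERY BULK_LOGGED                        -- 2. recovery model
ALTER DATABASE TestDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE     --    dbo use only

-- 3. and 4. create the new file/filegroup and change the table's
--    Text Filegroup property through the UI

ALTER DATABASE TestDB SET RECOVERY FULL                               -- 5. back to full
BACKUP LOG TestDB TO DISK = 'F:\Backup\TestDB_Log.bak'                -- 6. log backup (bulk_logged)
ALTER DATABASE TestDB SET MULTI_USER                                  -- 7. let users back in
```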

As for resources that might help you out more with this question and with understanding some of the management options (in terms of backup/restore for VLDBs), see my series on Backup/Restore in SQL Server Magazine. A complete list of my articles is here.

Q: After doing index maintenance in Enterprise Manager, the LogicalScanFragmentation was zero for the indexes that I modified. After running DBCC DBREINDEX the LogicalScanFragmentation jumped up to approx 12% on all the indexes. Not sure why this happened, but there must be some difference in the way the indexes were rebuilt.


Also, somewhat related to the first part of the previous question… It turns out that a Logical Scan Fragmentation of 0 (as reported by DBCC SHOWCONTIG WITH ALL_INDEXES) is actually a bug. It’s unlikely that it “jumped” to 12%; it probably wasn’t really 0 to begin with…

But, for completeness, there could be one other factor in it increasing after a rebuild, and that’s IF the space into which the index rebuilds is NOT contiguous. You *MAY* want to consider rebuilding/moving the object to another filegroup to make sure that it’s completely contiguous, but that might not be worth it.

Q: (This is from a recent email, fyi) I found a bug in your sp_RebuildIndexes.sql script… It doesn’t work for clustered indexes that have columns set to DESC!


THANKS! This is great. I have to admit that I haven’t had clients with clustered indexes with DESC columns but it could certainly happen and good to know that I have a bug (we all have to have one someday, right? he he)!

So, I have updated/fixed the script here: sp_RebuildIndexes.sql (11.08 KB). For completeness, I have also updated the MSDN webcast Q&A zip files (Part I here and Part II here) and the version that is listed on my sample scripts page on SQLskills (here). It shouldn’t impact too many people, but it’s good to have the right code! Always let me know if you find something. (This fix is from fellow RD Morten Abrahamsen, who is the RD for Denmark, Norway and Sweden.)

Q: (Also, from a recent email) I don’t quite understand what indexdefrag will do for a clustered index. It is my understanding that the leaf level of a clustered index is the data (i.e. the leaf pages are the data pages). If these pages are being physically reordered and compacted [during a rebuild], wouldn’t that accomplish the same thing as a defrag index?


Rebuilding and defragging both work on the leaf level of an index, regardless of index type. A rebuild completely rebuilds the entire structure (so it’s more thorough). A defrag solely rearranges the leaf-level pages, using “mini” transactions as it moves data through the leaf level.
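Syntax-wise, the two options look like this; the table and index names are placeholders:

```sql
-- Sketch: the two defragmentation options (names are placeholders).
DBCC DBREINDEX ('dbo.TestTable', 'TestTableCL')
    -- offline; completely rebuilds the entire index structure

DBCC INDEXDEFRAG (TestDB, 'dbo.TestTable', 'TestTableCL')
    -- online; shuffles and compacts only the leaf-level pages
```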

So – your question is really more of a statement – and a correct one too… Yes, the leaf level of the clustered index IS the data so if you perform a rebuild then there’s no reason to defragment.

Q: You mentioned that DBCC DBREINDEX and CREATE w/DROP_EXISTING are synonymous and from what I can tell from your documentation as well as other documentation that I have read, the DBCC DBREINDEX does rebuild the clustered index.  We currently have a script similar to the sp_RebuildIndexes.sql that you provided; however, we perform the DBCC DBREINDEX for all indexes including the clustered indexes.  We have a consultant telling us that we must use the CREATE w/DROP_EXISTING for the clustered indexes.  At first I didn’t think this was true based on what I learned from your webcast. However, after looking at your sp_RebuildIndexes.sql script I do see that you are also using the CREATE w/DROP_EXISTING for the clustered index. Can you provide some clarification for me and verify if we should be using CREATE w/DROP_EXISTING for clustered indexes?


Well, yes and no… To be honest… you caught me! :)


So, to answer whether or not these are synonymous – well, kind of! There’s one thing you can do with CREATE w/DROP_EXISTING that you can’t do with DBCC DBREINDEX. Using CREATE w/DROP_EXISTING you can CHANGE the definition of the clustering key. If you’re just rebuilding then there’s NO difference….
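To make that concrete, here’s a sketch of the one thing DROP_EXISTING can do that DBCC DBREINDEX can’t (the table, index, and column names are placeholders):

```sql
-- Rebuild only: the index definition cannot change.
DBCC DBREINDEX ('dbo.TestTable', 'TestTableCL')

-- Rebuild AND change the clustering key in one operation:
CREATE UNIQUE CLUSTERED INDEX TestTableCL
    ON dbo.TestTable (col2, col1)      -- new key definition
    WITH DROP_EXISTING
```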

Now, to answer the question of why I use CREATE with DROP_EXISTING for the clustered index… it was just for the challenge! Really, I was trying to see how complex the dynamically executed string was going to be to build in order to supply the entire CREATE INDEX statement. I didn’t mean to create the confusion! There’s really NO technical reason for it! (Good catch!)