Conference Questions Pot-Pourri #3: unexpected index rebuild results and bulk-logged operations

A couple more questions from the last couple of classes.

Q1) Why doesn’t performing an index rebuild alter the fragmentation?

A1) Here are the possibilities – all of which I’ve seen happen:

  • There isn’t an index – DBCC DBREINDEX or ALTER INDEX … REBUILD is being run on a table that is just a heap, so the (extent) fragmentation of the heap isn’t changing because there’s no way to rebuild a heap (except by the heavily NOT recommended method of creating and then dropping a clustered index).
  • The index is too small. An index with only a handful of pages may not show any changes in fragmentation because all its pages are single-page allocations from mixed extents (see my previous post on extent types for more info), so rebuilding the index does nothing.
  • The workload and schema are such that by the time the rebuild has finished and the fragmentation calculation has been done again, the index is already getting fragmented again.
  • The Extent Scan Fragmentation result from DBCC SHOWCONTIG is being used to gauge fragmentation for an index stored in a filegroup with multiple files. The Extent Scan Fragmentation in DBCC SHOWCONTIG does not cope with multiple files (as is documented in Books Online) and so the value may even go UP in some cases!
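Given the last bullet, a more reliable way to gauge fragmentation than DBCC SHOWCONTIG is the sys.dm_db_index_physical_stats DMV, whose avg_fragmentation_in_percent column is unaffected by multiple files in a filegroup. A minimal sketch (the table name dbo.MyTable is a placeholder):

```sql
-- Check logical fragmentation and page count for all indexes (and the heap,
-- index_id = 0) on one table. Tiny page_count values explain why a rebuild
-- can appear to do nothing.
SELECT
    OBJECT_NAME (ips.object_id)      AS TableName,
    i.name                           AS IndexName,
    ips.index_type_desc,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats (
    DB_ID (), OBJECT_ID (N'dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
```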

Q2) What operations take advantage of minimal-logging when the recovery model is BULK_LOGGED?

A2) The list is very small – just four classes of operations:

  • Index builds, rebuilds, and drops of a clustered index (NOT index defrags with DBCC INDEXDEFRAG or ALTER INDEX … REORGANIZE – this is a common misconception).
  • Bulk load operations (e.g. BCP, INSERT … SELECT * FROM OPENROWSET (BULK…), and BULK INSERT).
  • Inserts or appends of LOB data (either using WRITETEXT/UPDATETEXT for TEXT/NTEXT/IMAGE data types, or UPDATE with a .WRITE clause).
  • SELECT INTO operations on permanent tables.

For these operations, only the allocations are logged in the transaction log. Any extents that are allocated and changed through a minimally-logged operation are marked in the ML bitmaps (one ML map page covering every 4GB of each data file), and the next transaction log backup will also read all those extents and include them in the backup.
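Putting that together, a typical bulk-load workflow looks like the sketch below (the database, table, and file paths are placeholders; note that conditions such as a TABLOCK hint must also be met for a bulk load to be minimally logged):

```sql
-- Switch to the BULK_LOGGED recovery model for the duration of the load.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- Bulk load with TABLOCK, one of the prerequisites for minimal logging.
BULK INSERT dbo.MyTable
    FROM 'C:\load\data.dat'
    WITH (TABLOCK);

-- Switch back to FULL and take a log backup. This backup also contains
-- every extent changed by the minimally-logged operation, so it can be
-- much larger than the log itself.
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP LOG MyDB TO DISK = 'C:\backups\MyDB_log.trn';
```

The trade-off is that point-in-time restore is not possible into any log backup that contains a minimally-logged operation.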

5 thoughts on “Conference Questions Pot-Pourri #3: unexpected index rebuild results and bulk-logged operations”

  1. I am reading this blog again and again in order to understand which counter I have to use to figure out the fragmentation. The database I am dealing with has multiple files and all indexes are spread across multiple files (we were told that the I/O paths are very different for each file).
    Let me repeat: in my case, all I have to concentrate on is the Logical Fragmentation counter if the table is not a heap. Am I correct, or do I need to look at other counters too?
    Is logical fragmentation > 30% a good criterion?

  2. I have some large tables where online rebuilds are not defragmenting index_level 1, although there are thousands of pages at that level. It looks like all I’m getting is some compaction of pages. It seems like this type of fragmentation in index_level 1 must be a problem – is that correct?

    Here’s an example:

    Table index_depth=4

    Before online rebuild:
    Clustered index:
    Index_level 0 (leaf) –
    avg_fragmentation_in_percent = 0.1
    page_count = 1,041,444
    avg_page_space_used_in_percent = 91

    Index_level 1 –
    avg_fragmentation_in_percent = 93
    page_count = 5,643
    avg_page_space_used_in_percent = 93


    After online rebuild:
    Index_level 0 (leaf) –
    avg_fragmentation_in_percent = 0.0
    page_count = 1,041,932
    avg_page_space_used_in_percent = 91

    Index_level 1 –
    avg_fragmentation_in_percent = 98.7
    page_count = 5,590
    avg_page_space_used_in_percent = 94
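Per-level numbers like those in the comment above only appear when sys.dm_db_index_physical_stats is run in 'DETAILED' mode; the default 'LIMITED' mode scans only the leaf level. A sketch of the kind of query that produces them (the table name dbo.MyBigTable is a placeholder; index_id 1 is the clustered index):

```sql
-- Show fragmentation, page count, and page density for each level
-- of the clustered index, from leaf (index_level 0) to root.
SELECT index_level,
       avg_fragmentation_in_percent,
       page_count,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats (
    DB_ID (), OBJECT_ID (N'dbo.MyBigTable'), 1, NULL, 'DETAILED')
ORDER BY index_level;
```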

  3. Paul: An ALTER TABLE … REBUILD on a heap will remove forwarded records, so in this case we can say it does address fragmentation, right?
