I made them up. Yup.

I’m talking about the guidance, which is:

  • if an index has fewer than 1000 pages and is in memory, don’t bother removing fragmentation
  • if the index has:
    • less than 5% logical fragmentation, don’t do anything
    • between 5% and 30% logical fragmentation, reorganize it (using DBCC INDEXDEFRAG or ALTER INDEX … REORGANIZE)
    • more than 30% logical fragmentation, rebuild it (using DBCC DBREINDEX or ALTER INDEX … REBUILD)

These numbers are made up. They can and will vary for you, but they’re a good starting point to work from.
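As a sketch of how those thresholds translate into practice, here’s an illustrative query against sys.dm_db_index_physical_stats (the SQL Server 2005+ replacement for DBCC SHOWCONTIG). The query and its CASE buckets are just my thresholds written as code, not official guidance – adjust the numbers to suit your environment:

```sql
-- Illustrative only: classify each index by the rule-of-thumb thresholds above.
-- Uses the 'LIMITED' scan mode, which is the cheapest way to get
-- avg_fragmentation_in_percent and page_count.
SELECT
    OBJECT_NAME(ips.object_id)            AS table_name,
    i.name                                AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count,
    CASE
        WHEN ips.page_count < 1000                     THEN 'ignore'      -- too small to matter
        WHEN ips.avg_fragmentation_in_percent < 5      THEN 'ignore'      -- under 5%: do nothing
        WHEN ips.avg_fragmentation_in_percent <= 30    THEN 'REORGANIZE'  -- 5-30%: ALTER INDEX ... REORGANIZE
        ELSE                                                'REBUILD'     -- over 30%: ALTER INDEX ... REBUILD
    END                                   AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;  -- skip heaps
```

A real maintenance job would also weigh the factors discussed below (recovery model, workload, and so on) before acting on the suggested_action column.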

There’s been some discussion since PASS, when I confessed publicly on Twitter (during Grant Fritchey‘s session) to making them up, about whether I really said that. Yes – I really did make them up.

Back in 1999/2000 when I wrote DBCC INDEXDEFRAG and DBCC SHOWCONTIG for SQL Server 2000, customers wanted *some* guidance on what the thresholds should be where they should care about fragmentation or not, and how to remove it. We had to put *something* into Books Online (my favorite “it depends!” wouldn’t have been too helpful), so I talked to some customers, inside and outside Microsoft, did a bunch of experimentation, and chose these numbers as most appropriate at the time. So they’re not *really* made up – they were chosen carefully.

They’re not set in stone – they’re a big generalization, and there are a ton of other factors that may affect your choice of threshold and fragmentation removal method (e.g. recovery model, high-availability technologies in use, log backup schedule, query workload, disk space, buffer pool memory, and so on). I wish Microsoft would update the old whitepaper on fragmentation – they keep promising me they’ll get around to it.

In the meantime, take those numbers with a pinch of salt and don’t treat them as absolute.