Q: Can I automate DBCC DBREINDEX to rebuild all tables on all databases on a server?
Actually, yes and no… With stored procedures you'll have a hard time changing database context without resorting to dynamic string execution, and that can add a bit of frustration. BUT – it's your lucky day (and really, it's mainly because I'm on a ferry going to Victoria, Canada right now and the Washington State Ferry system doesn't currently have wireless internet access… and well, I have a few spare minutes :)).
Anyway, I've put together a stored procedure which allows you to enter a command that will be processed in all databases. It's similar to the undocumented sp_MSforeachdb but with a simpler format. The basic idea is to create a cursor holding the list of all of the databases and then, while walking the cursor, build a string that includes a database context switch (USE) followed by the command to execute:
SELECT @ExecString = N'USE ' + QUOTENAME(@DBToProcessName, ']') + N' ' + @CommandToExecute
SELECT @ExecString AS 'Command to be executed'
Please note that I have NOT done a lot of error handling, nor have I done a lot of testing, but this works pretty well for what I've used it for and, in response to this question, it certainly works. It's a great start for you to use/learn from and build upon for your own procedures. Here's the script: sp_ExecuteInDatabase.sql (3.76 KB). And here's how you use it:
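If you just want the shape of the procedure before downloading the script, here's a minimal sketch of the cursor-plus-dynamic-SQL pattern described above. The parameter names come from the snippet earlier in this post; the cursor details, the optional single-database parameter, and the tempdb filter are my simplifications, and (like the real script) error handling is omitted:

```sql
CREATE PROCEDURE sp_ExecuteInDatabase
    @CommandToExecute nvarchar(2000),
    @DBToProcessName sysname = NULL     -- optional: limit to a single database
AS
DECLARE @ExecString nvarchar(4000)

-- Single-database case: build the USE + command string and run it
IF @DBToProcessName IS NOT NULL
BEGIN
    SELECT @ExecString = N'USE ' + QUOTENAME(@DBToProcessName, ']')
                       + N' ' + @CommandToExecute
    EXEC (@ExecString)
    RETURN
END

-- All-databases case: walk a cursor over the database list
DECLARE DBCursor CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE name <> N'tempdb'             -- skip databases you don't want touched

OPEN DBCursor
FETCH NEXT FROM DBCursor INTO @DBToProcessName
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @ExecString = N'USE ' + QUOTENAME(@DBToProcessName, ']')
                       + N' ' + @CommandToExecute
    SELECT @ExecString AS 'Command to be executed'
    EXEC (@ExecString)
    FETCH NEXT FROM DBCursor INTO @DBToProcessName
END
CLOSE DBCursor
DEALLOCATE DBCursor
GO
```

The USE works here because EXEC() runs the string as its own batch, so the context switch and the command execute together – which is exactly why dynamic string execution is needed in the first place.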
exec sp_ExecuteInDatabase N'exec sp_RebuildIndexes'
or if you want to execute ONLY in one database:
exec sp_ExecuteInDatabase N'exec sp_RebuildIndexes', 'pubs'
Let me know if you find anything interesting and/or make changes to this one! Have fun.
Q: Is there a baseline amount of improvement for INDEXDEFRAG or DBREINDEX to be worthwhile? Please express your answer looking at % change for logical fragmentation and Scan Density.
Actually, the associated whitepaper does a nice job of correlating table size with defragmentation best practices. I’ll defer to that BUT I will say that IF you have the maintenance window then there’s never a negative in doing this – even for small tables. However, (and as the whitepaper also states) the larger and more active the table the more rewarding (in terms of performance as well as disk/cache savings) a defrag/rebuild is! Here’s the link to the whitepaper: Microsoft SQL Server 2000 Index Defragmentation Best Practices.
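To apply the whitepaper's thresholds you first need the numbers, and on SQL Server 2000 both Logical Scan Fragmentation and Scan Density come from DBCC SHOWCONTIG. A quick example (the pubs/authors table is just illustrative):

```sql
USE pubs

-- A full scan reports Scan Density and Logical Scan Fragmentation
-- for the table's clustered index (or heap):
DBCC SHOWCONTIG ('authors')

-- TABLERESULTS returns the output as a rowset, which makes it easy
-- to capture into a table and trend fragmentation over time:
DBCC SHOWCONTIG ('authors') WITH TABLERESULTS, ALL_INDEXES
```

Capturing the TABLERESULTS output on a schedule is what lets you decide, per table, whether the % change since the last rebuild is worth acting on.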
Q: In a *typical* scenario (500 tables, 15 large ones) how often would you recommend to go between rebuilding indexes? Ex. lower the fill factor until you go X days before your scan density is below the threshold.
Yep! I think I got this one during the webcast as well BUT – this is exactly my suggestion! If you can find the stable point where fragmentation only begins to occur when you're about to rebuild anyway, then that's the best choice. In that case you will keep your table the most compact (i.e. at the highest fillfactor) without wasting space; at the same time you will minimize fragmentation and keep DML operations fast! Again, the defrag whitepaper has a few additional ideas for you. Here's the link to the whitepaper: Microsoft SQL Server 2000 Index Defragmentation Best Practices.
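The tuning loop described above can be sketched with DBCC DBREINDEX, which takes the fillfactor as its third argument (again, the pubs/authors table and the fillfactor values are just illustrative):

```sql
-- Rebuild ALL indexes on the table ('' = all indexes) at a chosen fillfactor:
DBCC DBREINDEX ('pubs.dbo.authors', '', 90)

-- Then re-check fragmentation as the days pass:
DBCC SHOWCONTIG ('authors')

-- If Scan Density falls below your threshold sooner than X days,
-- rebuild at a lower fillfactor and measure again:
-- DBCC DBREINDEX ('pubs.dbo.authors', '', 80)
```

The goal is the equilibrium point: the highest fillfactor at which the table stays above your fragmentation threshold for the full interval between scheduled rebuilds.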