In Print: TechNet Magazine June 2008 SQL Q&A Column


My first magazine article is in print! I’ve taken over the bi-monthly SQL Q&A column for TechNet Magazine and I just received the June magazine in the mail today with my first column in it. Topics covered are:



  • Creating corruption and using page checksums
  • The shrink-grow-shrink-grow trap
  • How many databases can be mirrored per instance
  • A tip on changing the default server port, from Jens Suessmeyer

I’ve also just completed a feature article for either the July or August issue dealing with database maintenance for the ‘involuntary’ DBA – more details when it gets published.


If you don’t get the print version of TechNet Magazine, you can get to this month’s SQL Q&A column at http://technet.microsoft.com/en-us/magazine/cc510328.aspx. There may not be anything new if you’ve been following my blog for a while, but if you’ve just started, it’s worth a quick look.


Enjoy!


PS Let me know if you’ve got any good questions – I’ve already completed the August column, but I’d like to hear about any questions you may have for later columns.

4 thoughts on “In Print: TechNet Magazine June 2008 SQL Q&A Column”

  1. I didn’t know about the shrink-grow-shrink-grow trick with index fragmentation, but I’m just against shrinking anyway because of the resulting disk fragmentation. Good stuff though.

    One question I get a lot that you might want to tackle is: if we’re using database mirroring, can we use snapshots on the secondary server to do DBCCs without impacting our primary server? (I know the answer, but it comes up all the time, so it might make a good question.)
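    For anyone wondering what that looks like in practice, checking the mirror via a database snapshot is roughly this (a sketch only; the database name and path are placeholders):

      -- Create a snapshot of the mirror database (the mirror itself
      -- can't be queried directly). One FILENAME entry is needed per
      -- data file, and NAME must match the source's logical file name.
      CREATE DATABASE MyDB_Snapshot
      ON (NAME = N'MyDB_Data', FILENAME = N'S:\Snapshots\MyDB_Data.ss')
      AS SNAPSHOT OF MyDB;

      -- Run the consistency checks against the snapshot, then drop it.
      DBCC CHECKDB (N'MyDB_Snapshot') WITH NO_INFOMSGS;
      DROP DATABASE MyDB_Snapshot;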

  2. Paul,

    Congrats on your article. I enjoyed reading it, and look forward to your future efforts.

    Similar to the second topic you covered in the article (on index rebuilds and the shrink-grow trap), I have some questions on statistics updates. I am familiar with the synchronous, blocking form of auto-update statistics (the only form in SQL 2000, and the default in SQL 2005 / 2008), where the connection / query that triggered the statistics update is blocked until the update completes. I am also aware of the optional asynchronous form of auto-update statistics in SQL 2005 and later (disabled by default, but it can be enabled via ALTER DATABASE DDL), which does not block the triggering connection / query but instead schedules the statistics update in the background.
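    For reference, the DDL in question looks like this (a minimal sketch; AdventureWorks is just a placeholder database name):

      -- Async auto-update only applies when auto-update itself is on.
      ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
      ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;

      -- Verify the settings.
      SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
      FROM sys.databases
      WHERE name = N'AdventureWorks';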

    What I am not sure of (despite further research):

    – Are the current statistics (from before the triggering query) retained while the statistics update is in process, so that the current and subsequent queries can proceed without blocking? I’m guessing yes, but the documentation is a bit unclear on this. It says “The query and any other concurrent queries compile immediately by using the existing out-of-date statistics” – but does “concurrent queries” mean:

    * Queries that were already started before the triggering query and require the statistics about to be updated

    * Queries that need the statistics about to be updated and started after the triggering query but before the statistics update was completed

    * Both

    Such behavior would be similar to the index rebuild behavior you described in your TechNet column, where the old index is retained until the new index is rebuilt – with a key difference being that statistics usually take up much less DB space than most indexes.

    – Does this asynchronous behavior (non-blocking; queries continue to use the old stats until the new stats are built) also extend to manually triggered statistics updates (the UPDATE STATISTICS … command, the sp_updatestats system stored procedure, etc. – sketched after these questions)? Or does it apply only to automatic statistics updates?

    – Will the dropped copies of outdated statistics cause significant DB fragmentation (from the freed slots later being filled by data / index content that is physically out of order), or are they too small to worry about? Perhaps these space management issues of the build-new / drop-old statistics behavior have always been with us (with both auto and manually updated statistics) and we just haven’t worried about the mechanics and timing until the advent of async update statistics?

    – When the statistics update is completed (sync or async), does this event mark affected cached query plans to be recompiled the next time they are needed (to take advantage of the updated stats)? Or do manual steps need to be taken to mark / recompile cached query plans? Books Online says: “This may cause the recompilation of cached plans that depend on the older statistics version” – but the word “may” isn’t the same as “will”.
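    For reference, the manual commands I mean, plus the documented way to explicitly mark a table’s plans for recompilation, are roughly these (a sketch; dbo.MyTable is a placeholder):

      -- Manually update statistics for one table (FULLSCAN is optional).
      UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

      -- Or update statistics for every table in the current database.
      EXEC sp_updatestats;

      -- Explicitly mark a table's cached plans for recompilation,
      -- rather than waiting for the optimizer to pick up new stats.
      EXEC sp_recompile N'dbo.MyTable';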

    Thanks for your insights.

    Scott R.

  3. Hi Paul,

    Please write an article about best practices for the layout of LUNs. I found many sources that were hard to understand and somewhat contradictory. I summarized my findings below, but I would rather defer to a "higher" SQL authority. Thanks.

    HOW TO LAYOUT DISK LUNS
    By Louis Nguyen

    EXISTING VANILLA DATABASES
    This article assumes we’re not developing a new database or performance tuning an existing one. Instead, we’re designing storage for existing databases, behind applications we have minimal knowledge of. These are vanilla databases, each with one MDF and one LDF file, and this article addresses the standard case where one Microsoft SQL Server instance hosts multiple such databases.

    LDF
    Of prime consideration are the LDF transaction log files. To explain, we have to delve into how Microsoft SQL Server works. For a database transaction to commit, the disk subsystem must report back to SQL Server that the transaction was written to "stable media" – the LDF [1][2]. (Most disk subsystems have battery-backed write caching, which is another story.) These writes to the LDF are sequential. Note that if the database recovery model is set to FULL, operations such as rebuilding indexes also write heavily to the LDF. For peak performance, each database should have only one LDF file, ideally located on its own LUN, separate from any other database's LDFs or files [3], and that LUN should be RAID 10. In practice, allocating a separate LUN for each LDF isn't realistic; however, for mission-critical databases, there are very few reasons not to follow this best practice.
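    To audit where each database's files actually live, a quick sketch using the sys.master_files catalog view (SQL Server 2005 and later):

      -- Lists every data and log file with its physical path, making
      -- LDFs that share a LUN with data files easy to spot.
      SELECT DB_NAME(database_id) AS database_name,
             type_desc,      -- ROWS (MDF/NDF) or LOG (LDF)
             physical_name
      FROM sys.master_files
      ORDER BY database_name, type_desc;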

    WHAT ABOUT SIMPLE RECOVERY
    Databases in the SIMPLE recovery model still actively use the LDF [4]. Transactions are still written to the LDF, and it can still grow dramatically if there is a runaway transaction. What is different is that SQL Server periodically truncates (but does not shrink) the inactive contents of the LDF at checkpoints, once the transactions in it have committed.
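    A quick way to watch log usage and confirm truncation is happening (a sketch using a long-standing DBCC command):

      -- Reports log size and percent used for every database. In the
      -- SIMPLE recovery model, 'Log Space Used (%)' should fall back
      -- down after each checkpoint truncates the inactive log.
      DBCC SQLPERF (LOGSPACE);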

    WHAT ABOUT MDF
    The MDF main database file is, unintuitively, of less concern than the LDF. Again we have to delve into how Microsoft SQL Server works. Data pages are stored in a memory cache, known by several names: the buffer cache, buffer pool, or bpool [1][5]. When a page is modified, it is not immediately flushed to the MDF [6]. Instead, background processes such as checkpoint, eager writing, and the lazy writer later flush the dirty pages to disk, so several data modifications may accumulate in the buffer cache before a flush occurs. When SQL Server gets a read request, it reads from the buffer cache; if the page is not in the cache, SQL Server copies it from disk (the MDF file) into the cache [7]. Assuming the server has adequate RAM, it is recommended to locate all MDFs on the same LUN, made up of many disk spindles [3].
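    To see how much of each database the buffer cache currently holds (a sketch against the sys.dm_os_buffer_descriptors DMV, SQL Server 2005 and later):

      -- Counts the 8 KB pages cached in the buffer pool per database.
      SELECT DB_NAME(database_id) AS database_name,
             COUNT(*) AS cached_pages,
             COUNT(*) * 8 / 1024 AS cached_mb
      FROM sys.dm_os_buffer_descriptors
      GROUP BY database_id
      ORDER BY cached_pages DESC;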

    TEMPDB
    If a database makes heavy use of temp tables or features such as row versioning, the TempDB MDF should be placed on its own RAID 10 LUN. The TempDB LDF should be placed on a LUN dedicated to log files, preferably its own [3]. Note that online index rebuilds use row versioning and hence TempDB [8]. It is also recommended that TempDB have N data files, where N is the number of CPUs [9].
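    Adding TempDB data files looks something like this (a sketch; the path and sizes are placeholders – keep all the files the same size so proportional fill spreads the load evenly):

      -- Add one more pre-sized data file to TempDB; repeat per CPU.
      ALTER DATABASE tempdb
      ADD FILE (NAME = N'tempdev2',
                FILENAME = N'T:\TempDB\tempdev2.ndf',
                SIZE = 4096MB,
                FILEGROWTH = 512MB);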

    BACKUPS
    For database backups to disk, a separate LUN should be allocated. This is especially true if a database takes frequent transaction log backups. Backup writes to disk are sequential and may interfere with random OLTP I/O.
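    For example, pointing backups at the dedicated backup LUN (a sketch; the drive letter and database name are placeholders):

      -- Full and log backups written to a LUN reserved for backups.
      BACKUP DATABASE MyDatabase TO DISK = N'K:\Backups\MyDatabase_full.bak';
      BACKUP LOG MyDatabase TO DISK = N'K:\Backups\MyDatabase_log.trn';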

    PERFORMANCE COUNTERS
    To measure how the disk subsystem is performing, one commonly used Windows performance counter is Disk Queue Length. However, Avg. Disk sec/Read and Avg. Disk sec/Write are more reliable indicators [10]. Avg. Disk sec/Read greater than 15 milliseconds indicates a bottleneck, while Avg. Disk sec/Write can be as fast as 1 millisecond [11].
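    The same latencies can also be derived from inside SQL Server 2005 and later (a sketch; the numbers are cumulative since the instance last started, so sample twice and compare for a current picture):

      -- Average read / write latency per file, in milliseconds.
      SELECT DB_NAME(vfs.database_id) AS database_name,
             mf.physical_name,
             vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
             vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
      FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id;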

    REFERENCES
    [1] SQL Server 2000 I/O Basics
    http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlIObasics.mspx

    [2] SQL Server 7.0, SQL Server 2000, and SQL Server 2005 logging and data storage algorithms extend data reliability
    http://support.microsoft.com/kb/230785

    [3] SQL Server 2005 Configuration Blog #2.doc
    http://blogs.msdn.com/sqlcat/archive/2005/11/21/495440.aspx

    [4] Checkpoints and the Active Portion of the Log
    http://msdn.microsoft.com/en-us/library/ms189573.aspx

    [5] Buffer Management
    http://msdn.microsoft.com/en-us/library/aa337525.aspx

    [6] Writing Pages
    http://msdn.microsoft.com/en-us/library/aa337560.aspx

    [7] Reading Pages
    http://msdn.microsoft.com/en-us/library/ms191475(SQL.100).aspx

    [8] Row Versioning Resource Usage
    http://msdn.microsoft.com/en-us/library/ms175492.aspx

    [9] Storage Top 10 Best Practices
    http://sqlcat.com/top10lists/archive/2007/11/21/storage-top-10-best-practices.aspx

    [10] SQL Server Urban Legends Discussed
    http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx

    [11] Top SQL Server 2005 Performance Issues for OLTP Applications
    http://sqlcat.com/top10lists/archive/2007/11/21/top-sql-server-2005-performance-issues-for-oltp-applications.aspx

  4. Hi Paul,

    I was interested to read the section on database mirroring and calculating the transaction log rate.

    I don’t know how much there would be in this, but I’m curious about what’s sent ‘over the wire’ in database mirroring as opposed to log shipping. We have a database that’s mirrored and log-shipped, and index rebuilds show up some fairly spectacular differences between the amount of data being sent to the mirror and log-shipping partners. For example, an index rebuild might result in 8MB of changes (as shown in the mirroring monitor tool), but the log backup from the equivalent period is over 200MB. Is this because the mirror is sent the commands to rebuild the index, rather than the contents of the log? Or do I have the wrong end of the stick entirely?

    The other thing that’d be of interest would be how you’d go about estimating the load on the mirror based on the log file.
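    In case it helps, one rough way to measure the log generation rate (which approximates what mirroring has to ship) is the Log Bytes Flushed/sec counter exposed through sys.dm_os_performance_counters (a sketch; the object_name prefix varies for named instances):

      -- Cumulative log bytes flushed per database; sample twice a few
      -- seconds apart and diff the values to get a bytes/sec rate.
      SELECT instance_name AS database_name,
             cntr_value AS log_bytes_flushed
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Log Bytes Flushed/sec'
        AND object_name LIKE '%:Databases%';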

    Thanks

    Julia
