The impact of logging – terms and confusion!

First – why is logging needed?


This seems like an easy question – with possibly an easy answer… it’s to aid in transaction durability and to help in recovery when the system loses power. Simply put, the transaction log is a way for SQL Server to ensure that a transaction “survives” a power failure. While a transaction is processing, information about that transaction exists within memory. Before that transaction is acknowledged as complete, its log rows are written to the log portion of the database on disk (this is why it’s called a “write-ahead” log). In the event of a power failure – when SQL Server restarts – SQL Server performs restart recovery (two phases – REDO and then UNDO). Restart recovery happens every time SQL Server starts; this ensures that completed transactions are persisted into the data portion of the database and that no incomplete transactions end up within it. For this discussion the specifics of the log rows are not important – just that they contain enough information to “redo” the operations from *just* the log… in the event of a power failure.

The information that’s needed to recover the database after a power failure is really just what’s on disk. The data portion is probably out of date (how much so?), and the information kept in the log is used to bring the data up to date. A good question at this point is – how out of date is my data? The answer depends on a background process that runs almost solely to minimize this restart recovery time; it is called CHECKPOINT. A checkpoint occurs to make the data and log more current (but not necessarily transactionally consistent). What this means is that periodically what is in memory is “synchronized” to disk. Since users do NOT read directly from disk, the data portion of the database (on disk) does not need to be up to the minute. Users accessing data ONLY read from cache – which is current – so only the data in memory needs to be accurate. It is VERY possible that at any given time your disk is not only out of date but not even transactionally consistent. This is NOT a problem.
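As a side note, checkpoints are issued automatically by that background process, but the same operation can be triggered manually with the T-SQL CHECKPOINT command – shown here only as an illustration (the database name is just an example):

```sql
-- Switch to the database you want to checkpoint (example name)
USE SalesDB;

-- Flush dirty pages for this database to disk,
-- reducing the work restart recovery would have to do
CHECKPOINT;
```

You’d rarely need to do this by hand; SQL Server schedules checkpoints based on the recovery interval setting.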
If memory were to be lost (i.e. a power failure) then SQL Server would perform recovery on restart. In fact, if you think ONLY about restart recovery needing to bring a database “forward” after a power failure, then you could argue that SQL Server would not need to keep information in the log after it has been “synchronized” with the data portion of the database – as long as the transaction(s) had completed. And – YES – that’s true. You can choose to let SQL Server clear that information from the log by changing your recovery model. Where you might have a problem is when you have a more significant failure – such as the loss of a hard drive (and, even more interesting, *which* hard drive: a data drive or a log drive).
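For example, you can see each database’s recovery model – and change it – with T-SQL like the following (the database name is hypothetical):

```sql
-- See the current recovery model for every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- Let completed transactions be cleared from the log at checkpoint.
-- NOTE: this gives up the ability to take log backups
-- (and therefore point-in-time recovery)!
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
```

Switching back to FULL (or BULK_LOGGED) re-enables log backups, but only after a new full or differential backup re-establishes the log backup chain.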


Key points:



  • The Log is a “write-ahead” log

  • The data on disk is NOT guaranteed to be accurate without the transaction information in the log

  • The Transaction Log (on disk) ensures transaction durability

  • Restart Recovery happens every time SQL Server starts

OK – so that’s it for now… In the next blog entry, I’ll tackle “what affects logging.”


Thanks for reading,
kt

3 thoughts on “The impact of logging – terms and confusion!”

  1. Hi Kimberly. I know posting a question off topic is not very nice but I wanted to ask you a question regarding Database Mirroring.

    I attended your Techready Hands-on session about Database Mirroring and I recall you showing an application that allowed mirroring monitoring by showing the number of rows committed to the principal and mirror databases.

    I’m delivering a session on this subject next week and I can’t find a way of determining how many rows are in the mirror database because the database doesn’t allow any connections to it.

    How did you build the application? How can I check the number of rows in the mirror database at any given point in time? Certainly not through select count(*)…

    Any help would be much appreciated.

  2. Hey there Nuno – No problem on the off topic question… But – the DDM (Dual Database Monitor) does not actually query into the mirror database. The mirror database is in a NORECOVERY (restoring) state and in this state you cannot access or query the rows, etc. Having said that – there is no way to "confirm" with certainty that rows have been written to the mirror by querying it. However, if what you want to do is make sure that your principal and mirror are in sync – then you can query the sys.database_mirroring catalog view. This will return what your mirroring partnership status is:
    * Synchronizing – the mirror hasn’t received everything yet (note: in synchronous mirroring this only happens if the mirroring partnership has somehow become suspended)
    * Synchronized – the mirror is up to date
    * Suspended – the mirror server is available but the mirror database is not receiving the modifications (data movement has been paused)
    * Disconnected – the partners have lost communication with each other; the mirror server is unavailable.
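A minimal sketch of that catalog-view check, run on the principal (the database name is hypothetical):

```sql
-- Check the mirroring partnership status for one database
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,      -- PRINCIPAL or MIRROR
       mirroring_state_desc      -- SYNCHRONIZING, SYNCHRONIZED, SUSPENDED, ...
FROM sys.database_mirroring
WHERE database_id = DB_ID('SalesDB');
```

If mirroring_state_desc comes back SYNCHRONIZED, you know the mirror has hardened everything the principal has sent – without ever connecting to the mirror database itself.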

    In the end, the best way to "monitor" mirroring in RTM is to use the catalog views and the DMVs. Look forward to SP1 for additional ways to get more details on monitoring.

    Have fun!
    kt

  3. Hi Kimberly,
    I started with SQL Server almost 10 years ago, coming from a Sybase environment. Am I mistaken that in those days, performing a truncate table, for example, forced you to perform a full DB backup because after a ‘non-logged’ operation like this, no transaction log data was available for auto-recovery?
    I tested this a few years ago on SQL 2000 and it accepts tran log backups after non-logged operations. So what happened in the meantime?

    Cheers
    Geert

    PS when will you be visiting Belgium again?
