MSDN Webcast Q&A: A Primer to Proper SQL Server Development, Part 2 of 10

Creating a Reliable and Automated Backup Strategy, Part 2 of 10

Presented by Kimberly L. Tripp, SQLskills.com

Q: Can I view a recording of this webcast? Here’s the specific replay link.

Q: Where can we get the demo scripts? The demo scripts are in this zip (20050812MSDNWebcast.zip (4.86 KB)), attached here in this blog entry. However, I will also create an entry on www.SQLskills.com under Past Event Resources for the entire webcast series.

Q: Does Kimberly L. Tripp have a blog? Yes, www.SQLskills.com/blogs/Kimberly and if you’re reading this on my blog…well, then you already know! :-)

Q: Where are the links to all prior Webcast Q&As from this series? For the MSDN download for Part 1, click here. For the SQLskills blog entry for Part 1, click here.

Q: Is this Session 2 of the "Effectively Designing a Scalable and Reliable Database" webcast series? The title bar here reads "MSDN Webcast: MSDN: A Primer to Proper SQL Server Development (Part…)". A bit misleading, don't you think? So, I finally figured out the mystery here. When I originally submitted my proposal for this series I titled it like this: Effectively Designing a Scalable and Reliable Database – A Database Developer’s Primer to Proper SQL Server Development (Level 200-300). But somehow, through a few edits, and since each module also has a more detailed title, I think the title just got edited down to fit. In fact, with subtitles like the following, I can’t even fit the entire title, subtitle and session title on a single slide. :-)

Part 1: Creating a Recoverable Database
Part 2: Creating a Reliable and Automated Backup Strategy
Part 3: Designing Tables that Scale – Best Practices in Data Types and Initial Table Structures

So, the mystery ends here… It’s all the same content and the general idea is that building a scalable and reliable database takes a variety of best practices that together make it happen; there is no “DBCC MAKEITGOFASTER(‘TRUE’)” option.

Q: WRT "best practices," what do you think of SQLBPA? It’s a great (and free J) tool that can help you to recognize some of the most commonly overlooked best practices. Part of the intent is also to help you determine if you’re violating certain practices that might impact your upgrade strategy. For example, if you’re accessing the system tables you will want change that code to use information_schema views, stored procedures (sp_*) or system functions to gather the system information. If you’re looking solely for information about your upgrade path, the BPA has been updated/replaced with a new tool called the Upgrade Advisor.

Best Practices Analyzer Tool for Microsoft SQL Server 2000 1.0 

Upgrade Advisor URL – Microsoft SQL Server 2005 Upgrade Advisor – CTP June version

Q: In the SBS2003 environment the default backup does a live backup using a volume shadow snapshot of the MSDE and SQL databases on the server. How bad is this practice vs. doing a SQL backup and backing up the backup and the logs? Well, there are really a couple of issues here… First, the positives of controlling backups within SQL Server:

(1) You can back up databases individually.
(2) With certain database backup strategies (specifically, Database/Log strategies in the FULL recovery model), you can recover up to a specific point in time (P.I.T. recovery), to a marked transaction, or even up-to-the-minute (if the transaction log is still accessible at the time of the disaster).
(3) The process of backing up the transaction log helps to keep the transaction log small and manageable.

Second, the negatives:

(1) You have to manage each database backup strategy individually – however, tools like the Database Maintenance Plan Wizard can help to minimize this.
(2) You’ll need to do backups more frequently if you want to take more advantage of these features – however, this would also give you better granularity on your backups AND result in less data loss, because your snapshots probably aren’t very frequent.

Now – as for volume shadow snapshots – they are GREAT for a complete image of the system and they work (especially well) when you have to replace the complete system or when you want access to specific files on the system. Where you can run into trouble (and I’m not saying that this is always going to be the case, one way or the other) is when a SQL Server database becomes damaged and you revert back to just a file (or even all of the database files) from a shadow copy image. In almost all cases (and I really can’t think of too many where this would work, except in some read-only cases), you won’t be successful *just* restoring an old image of a single database file. So, the damaged database *might* end up being lost. Instead, you could restore the entire database image from the shadow copy point in time and that *might* work; however, if the database structures have changed since you backed it up, then the best choice would be to drop the database, copy over the files (the mdf and ldf) and then re-attach the database. That should work, and all you’ll need to do from there is add any logins and/or changes made since the last backup. Where things could be a lot worse is if one of your system databases becomes corrupt. This might limit your ability to recover. Now, having said all of this – I *know* that your SQL backup strategy WILL work and I know the complete set of options related to it. While it is a bit more administrative work, if your data is critical and minimizing data loss is *very* high on your list, I would use BOTH: Volume Shadow Copy for the system and file recovery strategies, and SQL Server database and log backups for your databases. OK, so I decided that I wanted to check things out a bit more… especially since I am not all that well-versed on VSS, I decided to look up as much as I could find and, well, I’m going to stop with this. First, pre-Windows 2003 SP1 it looks like even backing up a volume that includes databases that are in the FULL or BULK_LOGGED recovery models generates an error. As of SP1 (or the hotfix associated with this KB), the error is no longer generated, BUT I did find the following paragraph very useful:

This hotfix does not let Windows Backup perform a Volume Shadow Copy service backup of a SQL Server database with a recovery model that is set to Full or Bulk-Logged. You must use the SQL Server backup procedure to back up your SQL Server databases that have a recovery model that is set to Full or Bulk-Logged.

And – if you do a backup with NTBackup, it will log the following message in your backup log:

NtBackup does not support backing up SQL databases which are configured in non-simple recovery mode. If the database has been backed up, it is likely corrupted. Please do not restore this database. The recommended way of backing up non-simple SQL database is to use the backup solution that comes with SQL Server.

So, while I’d really like to dig into this deeper – I have to use my best judgment at this point and say that you should probably work to handle backups through SQL Server.
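
Just for reference, here is a minimal sketch of what a SQL Server-controlled strategy could look like for a database in the FULL recovery model. The database name (SalesDB) and the backup paths are hypothetical, and you’d normally schedule these through SQL Server Agent or a Database Maintenance Plan rather than running them by hand:

-- Hypothetical database and paths; adjust names, folders and frequency to your environment.
-- Nightly: full database backup
BACKUP DATABASE SalesDB
    TO DISK = 'C:\SQLBackups\SalesDB_Full.bak'
    WITH INIT, NAME = 'SalesDB full backup'

-- Every 15-30 minutes (as a SQL Server Agent job): transaction log backup
BACKUP LOG SalesDB
    TO DISK = 'C:\SQLBackups\SalesDB_Log.trn'
    WITH NOINIT, NAME = 'SalesDB log backup'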

Q: What is the default recovery model in MSDE and SQL Express? I ended up demo’ing this in the presentation. I was sure about MSDE but I wanted to do a quick demo on SQL Express. The quick check is just to look at the database properties of the model database… Another option: create a new database and see what its properties are. For both MSDE and SQL Express, the default database recovery model is SIMPLE.
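
If you’d rather script the check than use the UI, here’s a quick sketch; the throwaway database name is hypothetical:

-- Returns SIMPLE, FULL or BULK_LOGGED; 'model' is what new databases inherit
SELECT DATABASEPROPERTYEX('model', 'Recovery')

-- Or create a throwaway database and check what it inherited
CREATE DATABASE RecoveryModelTest
SELECT DATABASEPROPERTYEX('RecoveryModelTest', 'Recovery')
DROP DATABASE RecoveryModelTest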

Q: How would you compare the native backup to products like LiteSpeed from Imceda (which is used internally at Microsoft – I'll avoid the typical dog food comments) and SQL Backup from the chaps at Red-Gate? Third-party products can offer significant additional benefits over just the native backup types – for example, encryption and compression. This can result in improved security, faster backup times and smaller backup files.  

For more details about SQL Lite Speed, please go to: http://www.imceda.com/
For more details about SQL Backup, please go to: http://www.red-gate.com/

However, I would also like to make a comment about the fact that Lite Speed is used internally at Microsoft. Backup products are an interesting special case in terms of how they work with regard to SQL Server. Most backup products use VDI (the Virtual Device Interface provided for SQL Server) in order to get access to essentially the same pages that SQL Server backup does… So, in fact, using a third-party product like SQL Lite Speed is actually still exercising the VDI interface and still working SQL Server itself pretty well.

Q: Can you use third-party backup with log shipping? It depends on whether or not they support it and/or have automated software to help you configure it. Many products do offer ways to help automate log shipping with compressed versions of your backup files… See the earlier links for references to other products.

Q: How do you fix a log file that won't truncate? The best way to fix a transaction log that won’t clear:

(1) Terminate all connections to the database (best to set the database to SINGLE_USER mode).
(2) Back up the transaction log with a normal BACKUP LOG command.
(3) Shrink the transaction log to the smallest size possible (DBCC SHRINKFILE with the TRUNCATEONLY option).
(4) Alter the database to increase the transaction log to a more appropriate size with ALTER DATABASE.
(5) Return the database to MULTI_USER mode.

These steps are exactly the same as the steps necessary to remove fragmentation in VLFs; a rough sketch follows below. The file that can help you to get more familiar with this process is located in the scripts associated with this session and is named: VLF Optimization – Northwind Walkthrough.sql.
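
Here is a rough sketch of those five steps; the database name, the log file’s logical name, the backup path and the target size are all hypothetical, and the session script walks through the real details:

-- (1) Terminate all connections
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- (2) Normal transaction log backup
BACKUP LOG SalesDB TO DISK = 'C:\SQLBackups\SalesDB_Log.trn'

-- (3) Shrink the log file to its smallest possible size
DBCC SHRINKFILE (SalesDB_Log, TRUNCATEONLY)

-- (4) Re-grow the log to an appropriate size in one step
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Log, SIZE = 1024MB)

-- (5) Let users back in
ALTER DATABASE SalesDB SET MULTI_USER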

Q: If you do the steps recommended to clean up the fragmented VLFs, does it break the continuity of the transaction log backup chain? No! You can back up the transaction log before, as part of (step 2), and after you clear up the fragmentation – and no other backups are necessary. It is a complete chain that can be used in recovery.

Q: The transaction log portion of tempdb seems to be growing and growing – even with the simple recovery model. In fact, it only seems to release space when CHECKPOINT is run manually. Well, I have to admit that this is a bit strange and should not otherwise be the case. I would suggest seeing if you have any open transactions in the database (DBCC OPENTRAN), but that shouldn’t make a difference if it DOES clear when you execute CHECKPOINT. So, I have to admit that this is a bit bizarre. I would try to do some profiling and see if you can find any consistencies with the types of commands that are running at the time when it doesn’t seem to clear. I would also wonder if this is ALL the time or just occasionally. You may want to contact PSS for more assistance with this.
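
For reference, a sketch of the commands I’d start with when investigating something like this:

-- Is a long-running open transaction pinning the log?
USE tempdb
DBCC OPENTRAN

-- How much of each database's log is actually in use?
DBCC SQLPERF (LOGSPACE)

-- Force the checkpoint that should be happening automatically in SIMPLE
CHECKPOINT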

Q: TRUNCATE TABLE does not benefit from BULK_LOGGED? When I say that something benefits from the change to the BULK_LOGGED recovery model, I mean that the operation gets a performance gain (or some other benefit) from the switch. In the case of TRUNCATE TABLE, it is always executed in a manner that is logged efficiently. So, there is no gain in switching; it is always an optimal command to execute.

Q: We have a habit of switching to "bulk-logged" mode when we are doing index rebuilding, etc., which seems to benefit from this setting. Then, when we are done, we "switch" back to full mode. Does this affect our ability to recover later on using the logs that we are creating because of this switching back and forth? and Q: If I switch between the FULL and BULK_LOGGED recovery models, do I need to take a full database backup after the switch to BULK_LOGGED? No! In fact, switching is recommended, as long as you’re aware of the options that you lose while you are running in the BULK_LOGGED recovery model. However, the transaction log continuity is not broken – given that you can successfully back up the log after the bulk operation completes. To minimize the time during which you are vulnerable, make sure to back up your log right before the switch (to BULK_LOGGED) as well as right after the switch (back to FULL); see the sketch below. The file that can help you to get more familiar with this process is located in the scripts associated with this session and is named: DBAlterForBatchOperation.sql.
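
A sketch of the pattern – the database name and paths are hypothetical, and DBAlterForBatchOperation.sql shows the full version:

-- Back up the log right before the switch to minimize the vulnerable window
BACKUP LOG SalesDB TO DISK = 'C:\SQLBackups\SalesDB_BeforeBulk.trn'

ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED

-- ... run the index rebuild / bulk operation here ...

ALTER DATABASE SalesDB SET RECOVERY FULL

-- Back up the log again right after switching back; the chain stays intact
BACKUP LOG SalesDB TO DISK = 'C:\SQLBackups\SalesDB_AfterBulk.trn'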

Q: Is it advisable to use transactions for each and every SQL statement? Even if they are a single, simple, one-command transaction? If not, how can we use the marked transaction? Well, the answer to the first part is: NO, it is not necessary and it is not advisable to wrap every command in a BEGIN TRAN/COMMIT TRAN block. However, as for the second part – you cannot mark a transaction without using the following complete syntax:

BEGIN TRANSACTION TransactionName WITH MARK 'description of mark'
    SQL_Statements
COMMIT TRAN

The file that can help you to get more familiar with this process is located in the scripts associated with this session and is named: Restore Log and Marked Transactions.sql.
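
For example, a runnable sketch with a hypothetical table (dbo.Products) and mark name:

BEGIN TRANSACTION MonthlyPriceUpdate
    WITH MARK 'Before the monthly price update'

    UPDATE dbo.Products
        SET UnitPrice = UnitPrice * 1.05

COMMIT TRANSACTION MonthlyPriceUpdate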

Q: Do you dynamically create Transaction MARKs to make the mark names unique? I would assume a restore would stop at the first mark it came across with the specified name? You can certainly use dynamic string execution to build your transaction mark names dynamically. However, you can also use a mark with a datetime parameter. From the BOL: The STOPATMARK and STOPBEFOREMARK options both support an AFTER datetime clause. When used with datetime, mark names need not be unique. In a RESTORE DATABASE statement, datetime is required to recover to a marked transaction. In a RESTORE LOG, however, datetime is optional. If AFTER datetime is omitted, roll forward stops at the first mark with the specified name. If AFTER datetime is specified, roll forward stops at the first mark having the specified name exactly at or after datetime.
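
A sketch of what a restore to a named mark might look like; the backup files, mark name and datetime are all hypothetical:

RESTORE DATABASE SalesDB
    FROM DISK = 'C:\SQLBackups\SalesDB_Full.bak'
    WITH NORECOVERY

RESTORE LOG SalesDB
    FROM DISK = 'C:\SQLBackups\SalesDB_Log.trn'
    WITH STOPATMARK = 'MonthlyPriceUpdate' AFTER '2005-08-12 14:00:00',
         RECOVERY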

Q: When configuring a database maintenance plan through the Database Maintenance Plan Wizard, there is no possibility to switch between recovery models…Correct. However, you can modify the jobs created by the Database Maintenance Plans to include changes to the recovery model. BUT – having said that, I’m not sure why you’d want to change it during the actual backups… Instead, you should consider changing the recovery model as part of your batch process that would benefit from the change.

Q: How do you clear the log in SQL Server 2000 through the Enterprise Manager? The only way to clear the transaction log through SQL Server Enterprise Manager is to do a regular transaction log backup. The way you can do this is to right-click the database, choose All Tasks, then Backup Database – and choose a transaction log backup. The default behavior is to clear the inactive portion when the backup completes.

There is no way to just “clear the transaction log,” as that would break the continuity of the log. You could execute a command to clear the log, but if you find that you’re manually clearing the log and NOT keeping transaction log backups, I would recommend setting the recovery model to SIMPLE so that the transaction log is cleared automatically.
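
The T-SQL equivalent of that Enterprise Manager backup is simply this (the path is hypothetical):

-- Clears the inactive portion of the log once the backup completes
BACKUP LOG SalesDB TO DISK = 'C:\SQLBackups\SalesDB_Log.trn'

-- The "just clear it" command does exist, but it breaks the continuity of the log;
-- if you find yourself relying on it, switch the database to SIMPLE instead
-- BACKUP LOG SalesDB WITH TRUNCATE_ONLY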

Q: Can you explain when you need to recover .mdf and .ldf files? Recovering directly from the .mdf and .ldf files (through sp_attach_db) is really only guaranteed when the files were closed properly – either through sp_detach_db or because the files were copied while the server was shut down.
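
A sketch of the clean detach/copy/attach sequence; the database name and file paths are hypothetical:

EXEC sp_detach_db 'SalesDB'           -- closes the files cleanly

-- ... copy SalesDB.mdf and SalesDB_Log.ldf to a safe location ...

EXEC sp_attach_db 'SalesDB',
    'C:\Data\SalesDB.mdf',
    'C:\Data\SalesDB_Log.ldf'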

Q: How do you recover if the master database becomes corrupt? Is the only option uninstalling and reinstalling SQL Server? No, SQL Server 2000 includes a utility to rebuild the master database called rebuildm. SQL Server 2005 uses setup – but with special parameters that only rebuild the system databases as opposed to doing a full installation.

Q: Any BP on restores on replicated database i.e. distribution matching database? Transaction Log marking can help to ensure that multiple servers are all restored and rolled forward to the same point in time. Additionally, there are options related to whether or not you KEEP_REPLICATION (this is for the publisher). However, outside of those couple of options, you should review the Books Online for best practices in this configuration.

Q: Is it possible to back up a database in standby mode? No. But, I wasn’t completely sure during the webcast so I decided to try it. On the backup, I received:

Database 'test' is in warm-standby state (set by executing RESTORE WITH STANDBY) and cannot be backed up until the entire load sequence is completed. BACKUP DATABASE is terminating abnormally.

Q: With log shipping and 2000, and the destination database is exactly the same as the source, how about backing up the destination rather than the source? What happens to the shipped logs during the backup? You can’t backup a database that is in the NORECOVERY or the STANDBY recovery completion states. As a result, you can only backup the source, not the destination.

Q: What are my risks if I use Simple Recovery and what are the advantages of it? Risks are mostly in the potential for work-loss exposure and data loss in the event of a disk failure for your database. The advantages are simplicity and ease of management, because you don’t need to back up the transaction log. However, through simple automation procedures you can automate the backups of transaction logs while getting the most options for recovery. If you want more details, you should review the chapter that’s available for download on SQLskills. This chapter covers the differences in the recovery models as well as their pros and cons in management and recovery! Here’s a direct link: https://www.sqlskills.com/resources/SQLServerHAChapter9.pdf

Q: How come you have both SQL Server 2005 and Express on the same machine? Is it because you don't have Visual Studio 2005 installed? The documentation for VS.NET 2005 states to uncheck SQL Server 2005 Express if also installing the SQL Server 2005 CTP. Part of the reason that Visual Studio doesn’t want you to install Express if you already have the CTP of SQL Server is that they may not be the same versions. However, if you have the appropriately matched builds of SQL Server and Visual Studio, you can install multiple versions of SQL Server on the same machine – with Visual Studio Whidbey. Having said that – of what’s available today – the correct combination would be SQL Server CTP June (IDW15) with Visual Studio Beta II. On my machine, I’m actually running SQL Server 2000 (as a default instance), two Developer editions and one Express edition. AFTER SQL Server was successfully installed, I installed the appropriately matched Visual Studio installation. There won’t be any need for this once the products release in November, but while they’re both being developed – and while there are “later” releases that leap-frog over each other – it’s a bit confusing.

Q: SQL 2000 Maintenance Plans for transaction logs (say, all user databases) indicate job failure if just one user db is in the SIMPLE recovery model. Any alternatives short of creating our own backup script to avoid this SQL Maintenance backup job failure? Yes – this is because sqlmaint.exe (which the maintenance plan jobs run) does not verify the recovery model.

Q: Is this a SQL 2005 behavior too? The Database Maintenance Plan Wizard has been changed significantly in SQL Server 2005 and it’s now a lot easier to create more interesting/more complex plans on one or more databases – I would put the work into creating a class of jobs for your SIMPLE recovery model databases (just full backups and maybe differentials) and a different class of jobs for your FULL/BULK_LOGGED recovery model databases.

Q: Does the execution of a transaction log backup impact simultaneous database usage? If well optimized, the cost of a transaction log backup should be relatively low. This might be a good time to make sure that the transaction log is optimized! Go figure – I wrote another blog entry to help you optimize the transaction log. Check out 8 Steps to Better Transaction Log Throughput.

Q: Why does the log backup restore slower than restoring a full database backup? In general, restoring a transaction log is a more intense operation. When restored, the transaction log must be redone (“redo”) and then undone (“undo”). This process is what applies the changes to bring the database up to what it looked like when the transaction log backup was executed. And, while each log row’s redo operation is generally much faster than the original execution (it doesn’t have to do any calculations, etc., as the FINAL version of the change for the row is what’s in the log), there might be a lot of them!

Q: Do I have to backup the ReportServerTempDB? I’m not a reporting services expert but my understanding is that yes, you do need to backup a ReportServerTempDB. The best thing to read is from the Books Online: Backup and Restore Operations for a Reporting Services Installation.

Q: Will the 2005 Backup wizard allow scripting of the 'scheduling'? The backup command itself will be in the scheduled job but the statement that is used to create the job can only be scripted by scripting the job – not the backup command.

Q: Did you talk about partial database availability? Yes! SQL Server 2000 does not offer this new feature, but SQL Server 2005 does. Partial database availability is a new feature that allows a damaged database to stay online – even while secondary data files are damaged and unavailable.

Q: What is an optional secondary file? An optional secondary file is a data file that is NOT the mdf. Secondary data files are created in larger databases and store user-defined data. They might be used to store different types of data and/or even possibly part of a large table (partitioning). Imagine having 100s of millions of rows because you have three years of historical data – if year 2003 is in a secondary data file and becomes damaged, it will not require the database to be taken offline.
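
A sketch of how a secondary file like that might be added; the database name, filegroup name, path and size are hypothetical:

ALTER DATABASE SalesDB ADD FILEGROUP Archive2003

ALTER DATABASE SalesDB ADD FILE
    ( NAME = SalesDB_Archive2003,
      FILENAME = 'D:\Data\SalesDB_Archive2003.ndf',
      SIZE = 2048MB )
    TO FILEGROUP Archive2003

-- Tables (or partitions, in SQL Server 2005) created ON Archive2003 can then
-- be damaged and restored without taking the rest of the database offline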

Q: I have not seen Management Studio. Is this new with SQL Server 2005? Yes, SQL Server Management Studio (SSMS) replaces both Query Analyzer and SQL Server Enterprise Manager – in a more robust UI that also includes solution/project support and source control integration. Check out the online labs to get some hands-on lab time in the Virtual Lab environment here.

Q: A FULL BACKUP has the option "remove inactive entries from transaction log" — why doesn't it seem to do this? It would seem from this option that nightly full database backups would keep the log size small? This is a SQL Server Enterprise Manager bug. It should NOT even be an option on that tab. It should be grayed out. If you click around to differential and then back to full on the General Tab, before you go to the Options tab, then you will no longer see that option available. As far as this being something desired – not really… The reason why clearing the log is NOT performed after a full backup (or as part of a full backup) is so that if the full database backup becomes corrupt in some way, you can fall back on your transaction log sequence to successfully restore! Basically, this ensures that nothing ever breaks the continuity of the transaction log!

Q: You recommend Trace Flag 3231, what does it do? Trace Flag 3231 is an undocumented trace flag that disables the ability to clear the transaction log in a database that is running in the FULL or BULK_LOGGED recovery models. This significantly reduces the chance of someone breaking the continuity of the transaction log… ah, so that would probably be the next question – what else could possibly break the continuity of the transaction log? A change to the SIMPLE recovery model. It is highly recommended that ONLY changes between FULL and BULK_LOGGED be performed in databases where transaction log backups are relied upon for recovery.
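
Since 3231 is undocumented, test it carefully before relying on it; as a sketch, it is typically enabled like this:

-- Enable for the current instance lifetime
DBCC TRACEON (3231, -1)

-- To make it persistent, add -T3231 as a SQL Server startup parameter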

Q: How can we test our backup strategy? How do we test our differential backup content? When we test the differential backup content, do we need to restore the full backup content at the same time? The best test of your backup strategy is a complete restore sequence to a secondary server – which you then follow up by testing your application. (And yes – a differential backup can only be restored on top of the full backup on which it is based.) There *are* other dependencies outside of the database that could cause your application to fail. However, the restore to the secondary server will ensure that at least your data is accessible. Below are some KB articles that you can review for more details on some of these additional dependencies (a rough sketch of such a restore sequence follows after the list). To access them, go to http://support.microsoft.com.

Q240872 – INF: How to Resolve Permission Issues When a Database is Moved Between SQL Servers
Q246133 – INF: How To Transfer Logins and Passwords Between SQL Servers
Q307775 – INF: Disaster Recovery Articles for Microsoft SQL Server
Q224071 – INF: Moving SQL Server Databases to a New Location with Detach/Attach
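
Here is a rough sketch of such a test restore sequence on a secondary server; all file names, logical names and paths are hypothetical:

-- Full backup first (required before any differential or log restore)
RESTORE DATABASE SalesDB
    FROM DISK = 'C:\SQLBackups\SalesDB_Full.bak'
    WITH NORECOVERY,
         MOVE 'SalesDB_Data' TO 'D:\Data\SalesDB.mdf',
         MOVE 'SalesDB_Log'  TO 'D:\Data\SalesDB_Log.ldf'

-- Then the most recent differential (if you use them)
RESTORE DATABASE SalesDB
    FROM DISK = 'C:\SQLBackups\SalesDB_Diff.bak'
    WITH NORECOVERY

-- Then the log backups taken since that differential, recovering on the last one
RESTORE LOG SalesDB
    FROM DISK = 'C:\SQLBackups\SalesDB_Log.trn'
    WITH RECOVERY

-- Finally: transfer logins, check permissions and run your application tests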

Q: I have a database server hosted by a 3rd party on the web … is there a secure, automated way for me to make remote copies of the Database Backup? You'll need to talk to your ISP to see if they support that. More than anything it depends on the interface they give you, the services you've paid for and whether or not you have secure channels over which you can communicate with your ISP.

Q: Is there a way for you to answer these questions – that is – release the question from the question manager so that another question can be asked? I had a question in the "queue" for most of the webcast and because of that, I could not ask additional questions which I had wanted to ask… I ended up deleting my original question just so the question window would allow me to ask the next question… This is a LiveMeeting issue. I’m trying to put just a quick “a” in the response during the webcast but really, it’s an issue with the LiveMeeting Query Manager UI. I REALLY wish that questions just went into the queue and didn’t hold your window as well… I’ll see if we can pass your suggestion on to the LiveMeeting folks!

Thanks again for another GREAT week of questions. See you in the next session…which because this is posted so late, is session 4 – on Friday, August 26th. You can register for this session here: MSDN Webcast: A Primer to Proper SQL Server Development (Part 4 of 10): Best Practices in Indexing (Level 200)

Thanks for reading!

kt

8 thoughts on “MSDN Webcast Q&A: A Primer to Proper SQL Server Development, Part 2 of 10”

  1. Hi Kimberly,

    First let me thank you, I’ve become a fan of yours. I really like your posts/articles, I can’t even tell you how much I love spending time reading your stuff.

    A little correction here with regard to the question "Do I have to backup the ReportServerTempDB?". We don’t need to back this database up, it only has transient data in it.
    Thank you once again.
