MSDN Webcast Q&A: A Primer to Proper SQL Server Development
Creating a Recoverable Database, Part 1 of 10
Presented by Kimberly L. Tripp, SQLskills.com
Q: Can I view a recording of this webcast? Here’s the specific Replay Link: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032278585&Culture=en-US
Q: Where can we get the demo scripts? The demo scripts are in this zip (20050805 MSDN Webcast ScalableSys01.zip (5.47 KB)), here in this blog entry. However, I will also post them on www.SQLskills.com under Past Event Resources for the entire webcast series.
Q: Does Kimberly L. Tripp have a blog? Yes, www.SQLskills.com/blogs/Kimberly and if you’re reading this on my blog…well, then you already know! :-)
Q: How about running on a VMware Virtual Machine? Sure.
Q: Is the placement of data and log important when using a disk array or SAN? It’s important to make sure that you don’t have bottlenecks at the physical disk. You should see if your SAN software supports monitoring at the disk level and, if so, consider reconfiguring if bottlenecks occur. It’s harder to say that a SAN will have the same problems that you would have with direct attached storage, since SANs often place larger logical disks on more than one physical disk (by using 9, 18 or 36 GB chunks from a variety of physical drives). The main point is that SANs offer a lot of caching and other optimizations, so you don’t often have to worry as much. However, I would strongly suggest getting to know your SAN config as well as working with your SAN vendor to really get an understanding of how the resources are allocated as well as used. Also, make sure you’re current with all BIOS, firmware, drivers, etc.
Q: Where will the demo code be available? What version of SQL Server will you be using? All demos were done on SQL Server 2005; however, for this (and even for many others), a lot of the demo code works on both SQL Server 2000 as well as SQL Server 2005. At a minimum, many of the concepts apply, and where a feature is new or only supported on one version, it will be pointed out. For example, fast file initialization is on SQL Server 2005 only.
Q: Does SQL Server support log mirroring (i.e. dual logging)? No. A very old version did (I think 4.2 on OS/2 did) but the key problem was performance. You are a lot better off letting the hardware handle the mirroring rather than software talking to the OS talking to the hardware.
Q: Do Secondary files allow partial backups? Not really sure I follow this question but…if you’re asking whether or not you can perform a backup of just a part of the database – yes! In fact, any file can be backed up independently of the filegroup of which it’s a member (there was a restriction in 7.0 that didn’t allow this but as of SQL Server 2000, files can be backed up at any time). If you’re really interested in file and filegroup backups, I wrote a couple of articles for SQL Server Magazine and you can find the complete list here: https://www.sqlskills.com/articles.asp
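As a minimal sketch of what a file-level backup looks like (the database name SalesDB, the logical file name SalesData2, and the backup paths are all hypothetical):

```sql
-- Back up a single file independently of its filegroup
-- (supported as of SQL Server 2000)
BACKUP DATABASE SalesDB
    FILE = 'SalesData2'
    TO DISK = 'D:\Backups\SalesDB_SalesData2.bak';

-- Transaction log backups are still required so the file backup
-- can be rolled forward to a consistent point at restore time
BACKUP LOG SalesDB
    TO DISK = 'D:\Backups\SalesDB_Log.trn';
```

The key design point: a file backup is only useful together with the log backups taken afterward, which is what brings the restored file back in sync with the rest of the database.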
Q: What is a page? Generally speaking a page is a unit of storage. Specifically speaking with regard to SQL Server, a page is 8K in size and is the smallest unit of I/O in terms of data. When a table is stored on disk, it is stored in 8K chunks (and most of the time SQL Server allocates eight 8K chunks to objects). A 64KB block of the database is called an extent. SQL Server allocates extents once an object reaches a minimum size (which is also 64KB) to try to keep an object more contiguous.
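On SQL Server 2005 you can see the page and extent usage for an object yourself; this is a sketch, assuming a hypothetical table named dbo.Orders in the current database:

```sql
-- SQL Server 2005: how many 8K pages does each index of dbo.Orders use,
-- and how full are those pages on average?
SELECT  index_id,
        page_count,                        -- number of 8K pages
        avg_page_space_used_in_percent     -- page fullness
FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.Orders'),
        NULL, NULL, 'DETAILED');
```

Dividing page_count by 8 gives a rough idea of the number of 64KB extents the object occupies.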
Q: Is caching user specific or server specific. If two users are accessing same table, does it create two separate pages in cache or just one? Well, data cache is not user specific (at least not with regard to your question). However, there are a few things that would be local in scope – like a user-defined temporary table… However, a database table being accessed by many users, would only have one set of pages in cache.
Q: Why would a user of the database do a checkpoint? Users cannot perform checkpoints; only database owners and administrators can force a checkpoint. Generally speaking, it is not often that a checkpoint needs to be forced. SQL Server controls and handles the checkpoint automatically.
Q: Wouldn't the engine know the best time to do a checkpoint? Yep! And it does. The default setting for checkpoint is “0” minutes. Meaning – SQL Server decides!
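To see (or change) that setting, and to force a checkpoint manually when you really need one – a sketch, assuming you have the necessary admin rights:

```sql
-- 'recovery interval' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 (the default) means SQL Server decides when to checkpoint
EXEC sp_configure 'recovery interval (min)';

-- Owners/admins can still force one explicitly in the current database
CHECKPOINT;
```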
Q: How does page caching function during the 'redo' process? Same as it would normally. Data being “redone” is loaded into cache and changed – based on what’s held in the transaction log. (Keep reading, more in the next question/answer.)
Q: So does the transaction log contain all the information about say an update… i.e. what columns, what data? Yes, for the most part what is contained in the transaction log is the “after” version of the modifications so that log rows do not need to execute any functions or other code. The idea is that log rows can be processed extremely quickly – but have sufficient information to make sure that the data is modified properly. (And keep reading, more in the next question/answer.)
Q: When you say a transaction is re-done, what exactly happens? Does SQL Server automatically take values again from cache and copy to the disk? What if we lose the cache also during the process? Then redo will begin again when the system restarts…Generally, the process is – go to the log, redo, undo, checkpoint – and the information is not deemed inactive until after it checkpoints. So, even if you had the equivalent to “truncate log on checkpoint,” this information would not be lost in the midst of restart recovery (redo) because it’s still active until it’s checkpointed!
Q: Is rollback notification done asynchronously or is another client request required, e.g. a long running client with long SQL call intervals? I’m not sure if I follow this exactly BUT I think you’re wondering how you would know that your transaction has been rolled back? The key way that many applications know – is that they never got actual confirmation and maybe they’ve lost their connection and just timed out. But, if this doesn’t answer your question…send me mail!
Q: Can you pre-grow a database at non-peak times when space starts getting low? Sure, there are a few options really. One way: create a SQL Agent Job that checks space allocation at 2am and, if it’s getting close to full, executes a manual increase in space. With a bit of dynamic string execution I think you could get this to be a very flexible and easily automated process!
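Here’s a sketch of the check such a job step might run. Everything here is hypothetical – the database name SalesDB, the logical file name SalesData1, the 90% threshold, and the 1 GB growth increment – adjust to your environment:

```sql
-- Pre-grow the data file if it is more than 90% full.
-- sysfiles reports size in 8K pages; divide by 128 for MB.
DECLARE @sizeMB INT, @usedMB INT, @sql NVARCHAR(500);

SELECT  @sizeMB = size / 128,
        @usedMB = FILEPROPERTY(name, 'SpaceUsed') / 128
FROM sysfiles
WHERE name = 'SalesData1';

IF @usedMB > 0.9 * @sizeMB
BEGIN
    -- Dynamic string execution, since ALTER DATABASE
    -- requires a literal size
    SET @sql = N'ALTER DATABASE SalesDB MODIFY FILE '
             + N'(NAME = ''SalesData1'', SIZE = '
             + CAST(@sizeMB + 1024 AS NVARCHAR(10)) + N'MB);';
    EXEC (@sql);
END
```

Scheduled at 2am via SQL Agent, this grows the file in one large, planned increment instead of letting autogrow fragment it during peak hours.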
Q: Best drive configuration for data, logs, indexes? Well, typically, I like to separate data by access pattern and type rather than data from indexes. Typically, I like read-only vs. read-write vs. a single large table vs. LOB data.
Q: I have a 24/7 SQL Server production environment with a database that's well over 300 GB – when do I defrag it? You really have a couple of options to defrag a table – truly defrag (and only defrag it) or rebuild it (which does a lot more than just defrag). To make the answer even more clear – defrag often as it doesn’t take the table offline (in 2000). Rebuild the table when/if you can afford downtime against that table (in SQL Server 2000, a rebuild requires that the table be either read-only [when rebuilding a non-clustered index] or completely inaccessible [when rebuilding the clustered index]). So, now this gets a bit harder to answer! Check out the webcast on SQL Server 2000 – Index Fragmentation Best Practices here.
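The two options above map to two DBCC commands in SQL Server 2000; a sketch, assuming a hypothetical table dbo.Orders whose clustered index has index_id 1:

```sql
-- Option 1: online defrag – compacts and reorders the leaf level
-- of an index without taking the table offline (0 = current database)
DBCC INDEXDEFRAG (0, 'dbo.Orders', 1);

-- Option 2: full rebuild of all indexes on the table – does much more
-- (rebuilds the tree, resets fillfactor) but locks the table,
-- so schedule it when you can afford the downtime
DBCC DBREINDEX ('dbo.Orders');
```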
Q: If you have multiple data files, can you merge back to a single data file? Only when the files are members of the same filegroup. If/when you want to do this, there are two steps:

DBCC SHRINKFILE (file_to_remove_logical_name, EMPTYFILE)

This will empty the contents of the file into the other files within the same filegroup. Once emptied, use:

ALTER DATABASE dbname REMOVE FILE file_to_remove_logical_name
Q: What is zero initialization? Where the entire contents of the file are zeroed out. This is done for security reasons.
Q: Doesn’t truncateonly make us lose continuity between transaction backups? First – this was with regard to my recommendation to use DBCC SHRINKFILE with the TRUNCATEONLY option, NOT using TRUNCATE_ONLY on a transaction log backup… So, specifically, NO. DBCC SHRINKFILE with the TRUNCATEONLY option does not break the continuity of the transaction log. Now – just to add a bit of irony here… BACKUP LOG with TRUNCATE_ONLY no longer breaks the continuity of the transaction log in SQL Server 2005. In fact, BOTH the TRUNCATE_ONLY and the NO_LOG options have been changed to ONLY perform a CHECKPOINT. In a database running in the FULL or BULK_LOGGED Recovery Model, this will have NO real impact on the transaction log. In a database running in the SIMPLE Recovery Model, this will execute a checkpoint, and the database setting of simple truncates the inactive portion of the transaction log when a checkpoint occurs.
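To make the distinction concrete – a sketch, with a hypothetical log file logical name SalesDB_Log:

```sql
-- Safe: releases unused space at the END of the log file back to the OS;
-- does NOT break the transaction log backup chain
DBCC SHRINKFILE ('SalesDB_Log', TRUNCATEONLY);

-- Contrast: in SQL Server 2000 this DID break the backup chain;
-- in SQL Server 2005 it is reduced to just a checkpoint:
-- BACKUP LOG SalesDB WITH TRUNCATE_ONLY;
```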
Q: I've read the BOLs about faster performance when putting files on raw partitions? Hmm… I’d love to see the reference. They might have said that raw partitions may offer performance benefits but I would generally doubt it. More importantly, you’d lose other key features if you didn’t use NTFS – like Database Snapshots. So, my main point – don’t use raw partitions! Even if they did offer a performance gain, what you’d lose isn’t generally worth it, AND I can get better gains elsewhere (indexing, optimizing procedural code, etc.).
Q: Could you please tell us quickly why raid 1+0 is better? Basically, RAID 1+0 offers better availability than RAID 0+1 because it can tolerate the loss of more than one drive. If a drive in a RAID 1 array is lost, all other drives still function. If a drive in a RAID 0 array is lost, all other drives in the RAID 0 array stop functioning. Here’s a good link to review more about different RAID configurations: http://www.raidarray.com/04_00.html
Q: I recently came across a 200meg db with a 50GB log. We tried everything to truncate it but finally could only fix it by changing to a SIMPLE recovery model. What is the most likely culprit for such an outrageous log growth? Well, I wish I could say I hadn’t seen this… But, it’s due to autogrowth and it’s due to running in the full recovery model without performing transaction log backups (but you are performing database backups). Now, the reason why you had so much trouble with it – was because it was horribly fragmented (because of the autogrowth defaults). So, the main point here – if you follow the steps of the VLF Optimization content from the session, you would have also solved the problem. I’d at least check your current fragmentation and make sure that everything is fixed!
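To check whether the log is fragmented, and a sketch of the fix (the database name SalesDB, the logical log file name SalesDB_Log, the backup path, and the 8 GB target size are all hypothetical):

```sql
-- One row is returned per VLF (virtual log file); a very high row
-- count (hundreds or more) usually means autogrowth fragmentation
DBCC LOGINFO;

-- Typical fix: back up the log, shrink it, then re-grow it in one
-- large, planned increment so it contains few, large VLFs
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_Log.trn';
DBCC SHRINKFILE ('SalesDB_Log', TRUNCATEONLY);
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = 'SalesDB_Log', SIZE = 8192MB);
```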
Q: What is a "long" transaction (seconds, minutes, or hours?) There is no specific time, but the longer a transaction is – especially relative to transaction log backups – the greater the potential growth of the log and the less control you have.
Q: Will *you* be the one doing parts 2-10? Yep… I’m a glutton for punishment. Just kidding! To be honest, I really enjoy these webcasts!!! See you in the next one.
Q: Should SQL Server developers invest the time to learn Microsoft's Enterprise Library (the Application Blocks thing)? OK, I checked around a bit on this and the general consensus is that developers should be aware of all of the resources that exist. It’s by no means an absolute but there are quite a few *very* useful resources there. Review things lightly to see if anything seems useful to your application and then dig deeper if/when they do.
Q: Should the SQL Server disks be defragged at the OS layer with disk tools? Sure. It doesn’t hurt and can help. The only negative is that you must shut down SQL Server in order to do so. The good news: once defragged, you don’t really need to do it again (unless you have a lot of autogrowth and other files on the disks).
Q: I had a lot of problems convincing my clients to create a new instance of SQL in their servers. The objective was to isolate our systems from theirs. What are the pros and cons of creating an additional instance against adding more DBs to an existing one? Well, you’re correct that it’s mostly an isolation issue. A big concern in some SQL Server environments is the access to metadata that everyone has. In SQL Server 2000, logins can see that other databases and other logins exist – even if they can’t use them. This presents a security concern. In SQL Server 2005, metadata is restricted so that this doesn’t happen.
Finally, here is the link to the Part 2 Registration page.
See you in Part 2,