SQLskills holiday gift to you: all 2013 Insider videos

As we all wind down for the 2014 holiday season, we want to give the SQL Server community a holiday gift to say ‘thank you’ for all your support during 2014, and what better gift than more free content?!

As many of you know, I publish a bi-weekly newsletter to more than 11,500 subscribers that contains an editorial on a SQL Server topic, a demo video, and a book review of my most recently completed book. We’re making all the 2013 demo videos available so everyone can watch them – 24 videos in all, WMV format only. I did the same thing in previous years for the 2012 videos and 2011 videos.

Here are the details:

  1. January 2013: Recreating missing log files on attach (from Pluralsight) (video | demo code)
  2. January 2013: Using the sys.dm_db_stats_properties DMV (video | demo code)
  3. February 2013: Using Microsoft Data Link (video)
  4. February 2013: Linked servers and statistics (video | demo code)
  5. March 2013: Using the system_health Extended Event session (video | demo code)
  6. March 2013: Moving from SQL Trace to Extended Events (video | demo code)
  7. April 2013: Color coding and other SSMS features (video)
  8. April 2013: DISTINCT aggregate improvements (video | demo code)
  9. May 2013: Using the tsql_stack Extended Event action (from Pluralsight) (video | demo code)
  10. May 2013: Deferred drop behavior (from Pluralsight) (video | demo code)
  11. June 2013: Finding duplicate statistics (video | demo code)
  12. June 2013: Using data compression (video | demo code)
  13. July 2013: Undetectable performance problems (video | demo code)
  14. July 2013: SSMS Object Explorer features (video)
  15. August 2013: Parallel crash recovery (from Pluralsight) (video | demo code)
  16. August 2013: Tracking tempdb space usage (video | demo code)
  17. September 2013: Enabling instant file initialization (video | demo code)
  18. September 2013: Using query hashes (video | demo code)
  19. September 2013: Recovering from data purity corruptions (from Pluralsight) (video | demo code)
  20. October 2013: Implicit conversions (from Pluralsight) (video | demo code)
  21. October 2013: Extended Events templates (video | demo code)
  22. November 2013: Using the missing index DMVs (video | demo code)
  23. November 2013: Using older backups to retrieve data after corruption (from Pluralsight) (video | demo code)
  24. December 2013: Enabling database mail (video)

If you want to see the 2014 videos before next December, get all the newsletter back-issues, and follow along as the newsletters come out, just sign up at http://www.SQLskills.com/Insider. No strings attached, no marketing or advertising, just free content.

Happy Holidays and enjoy the videos!

Problems from having lots of server memory

A month ago I kicked off a survey asking how much memory is installed on your largest server that’s running SQL Server. Thank you to everyone who responded.

Here are the results:

[Survey results chart: distribution of installed memory on respondents’ largest SQL Server machines, from less than 16 GB up to more than 1 TB]

The “other” values are:

  • 3 more for the ‘128 GB or more, but less than 256 GB’ count
  • 1 more for the ‘Less than 16 GB’ count
  • One poor soul who only has 512 MB in their server!

This is very interesting:

  • I expected the majority of servers to fall into the middle of the range (around 128 GB), but only 37% fit into the 64 GB to 256 GB range.
  • I’m surprised by the percentage of servers (41%) with 256 GB or more.
  • I didn’t know what percentage would have more than 1 TB, so almost 10% is really cool to see.

So what do these results mean? Well, more than half of all respondents have servers with lots (more than 128 GB) of memory. The more memory you have, the more important it is to make sure that memory is being used efficiently: that you’re not wasting space in the buffer pool (see here), and that you’re not churning the buffer pool with poor query plans causing lots of reads (see here).

What other things could be problems with large amounts of memory?

  • Shutting down the instance. This will checkpoint all the databases, which could take quite a long time (minutes to hours) if suddenly all databases have lots of dirty pages that all need to be flushed out to disk. This can eat into your maintenance window, if you’re shutting down to install an SP or a CU.
  • Starting up the instance. If the server’s POST checks memory, the more memory you have, the longer that will take. This can eat into your allowable downtime if a crash occurs.
  • Allocating the buffer pool. We’ve worked with clients with terabyte+ buffer pools where they hit a bug on 2008 R2 (also in 2008 and 2012) around NUMA memory allocations that would cause SQL Server to take many minutes to start up. That bug has been fixed in all affected versions and you can read about it in KB 2819662.
  • Warming up the buffer pool. Assuming you don’t hit the memory allocation problem above, how do you warm up such a large buffer pool so that you’re not waiting a long time for your ‘working set’ of data file pages to be memory resident? One solution is to analyze your buffer pool when it’s warm, to figure out which tables and indexes are in memory, and then write some scripts that read much of that data into memory quickly as part of starting up the instance (a starting-point query is sketched after this list). For one of the same customers that hit the allocation bug above, doing this produced a big boost in getting to steady-state workload performance compared to waiting for the buffer pool to warm up naturally.
  • Complacency. With a large amount of memory available, there might be a tendency to slacken off on proactive work: looking for unused and missing index tuning opportunities, plan cache bloat, or wasted buffer pool space (as I mentioned above), thinking that having all that memory will be more forgiving. Don’t fall into this trap. If one of these things becomes such a problem that it’s noticeable on your server with lots of memory, it’s a *big* problem that may be harder to get under control quickly.
  • Disaster recovery. If you’ve got lots of memory, it probably means your databases are getting larger. You need to start considering the need for multiple filegroups to allow small, targeted restores for fast disaster recovery. This may also mean you need to think about breaking up large tables, using partitioning for instance, or archiving old, unused data so that tables don’t become unwieldy.
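
As an example of the buffer pool analysis mentioned in the warm-up bullet above, here’s a minimal sketch using the sys.dm_os_buffer_descriptors DMV. It only covers in-row and row-overflow allocations for brevity, and it’s a starting point rather than a full warm-up script:

-- Which objects occupy the most buffer pool space in the current database?
-- Run this while the buffer pool is warm to see what a warm-up script should pre-read.
SELECT
    OBJECT_NAME ([p].[object_id]) AS [Object],
    [p].[index_id],
    COUNT (*) AS [BufferPages],
    COUNT (*) * 8 / 1024 AS [BufferMB]
FROM sys.dm_os_buffer_descriptors AS [bd]
JOIN sys.allocation_units AS [au]
    ON [au].[allocation_unit_id] = [bd].[allocation_unit_id]
JOIN sys.partitions AS [p]
    ON [p].[hobt_id] = [au].[container_id]
WHERE [bd].[database_id] = DB_ID ()
    AND [au].[type] IN (1, 3) -- IN_ROW_DATA and ROW_OVERFLOW_DATA; LOB omitted for brevity
GROUP BY [p].[object_id], [p].[index_id]
ORDER BY [BufferPages] DESC;

A warm-up script could then simply scan the hottest tables and indexes (e.g. a SELECT COUNT (*) with an index hint per object) right after the instance starts.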

Adding more memory is one of the easiest ways to alleviate some performance issues (as a band-aid, or seemingly risk-free temporary fix), but don’t think it’s a simple thing to just max out the server memory and then forget about it. As you can see, more memory leads to more potential problems, and these are just a few things that spring to mind as I’m sitting in the back of class here in Sydney.

Be careful out there!

 

T-SQL Tuesday: Giving back in 2015

My friend Wayne Sheffield (b|t) is hosting T-SQL Tuesday this month and it’s been a long time since I’ve taken part (wow – since January 2011!). His theme is about how you’re going to give back or continue giving back to the SQL Server community in 2015. I’m going to talk about the things we at SQLskills do in the community, in the spirit of the T-SQL Tuesday theme, not as any kind of marketing or self-aggrandizement.

We do a lot (I think) for the SQL Server community, both because we’re just nice like that :-), and because we appreciate the people in the community as they provide our livelihood. To be honest, for me it’s mostly because I like helping people with SQL Server problems. It’s like an irresistible urge when I see someone with a problem I know how to help with.

We’re going to continue with these things in 2015:

  • Helping out on the fantastic #sqlhelp alias on Twitter
  • Blogging on all our blogs
  • Publishing our bi-weekly newsletter with a demo video and editorial (sign up at http://www.SQLskills.com/Insider)
  • Glenn publishes his monthly DMV queries for all versions of SQL Server from 2005 onward
  • Erin helps run the Cleveland User Group

And we’re also going to be doing a lot more:

  • Remote user groups. I had this mad idea to do remote user group sessions for anyone that asks – so we’ve currently got 49(!) remote user group sessions scheduled in 2015 at user groups in the US, UK, Ireland, Belgium, Netherlands, New Zealand, South Africa, Israel, Australia, Canada, Poland, and PASS virtual chapters. I’m personally doing 20 of them. Check if your user group has signed up, and if you’re in a country not listed above definitely hit us up for a session. See here for details.
  • I’m putting together a comprehensive web encyclopedia of all wait and latch types that exist, across all versions, slated to go live in January/February. Shoot me an email if you’d like to help provide data (involves installing debug symbols and XEvent sessions on your prod server). That’s going to be cool!
  • Kimberly and I have another special project coming up… more details in January/February

As Wayne says in his blog post, I encourage you to give back to the community. We’ve got one of the best technical communities in the world – I know we’re the envy of many others.

Thank you to everyone who blogs, tweets, runs/speaks at/attends user groups/SQL Saturdays/PASS – long may we all continue!

Cheers

October 2015 Dublin IE2/IEPTO-2 class open for registration

Due to popular demand, we’ve managed to juggle our schedule around a bit and found space to fit in another European class in 2015, and it’s open for registration!

We’ll be teaching our signature IEPTO-2 (formerly IE2) Immersion Event on Performance Tuning and Optimization, in partnership with our great friends Bob and Carmel Duffy of Prodata. It will be our first time back in Ireland since 2010.

The class will be October 12-16, and there are multiple discounts available depending on when you register:

  • Special Price (Before January 31st 2015) €2,195
  • Early Bird (Before June 30th 2015) €2,395
  • Full Price (After June 30th 2015) €2,795

You can get all the details on the class page here.

We hope to see you there!

Paul’s awesome ribs recipe

On Friday I made my signature pork ribs for our good friend Bob Ward (along with Kimberly’s truffled cauliflower mash and caramelized broccoli) and I promised to blog the ribs recipe. I’ve been making ribs like this for about ten years – they’re dry and you have to tear the meat off the ribs with your teeth, rather than the usual American ribs where the meat falls off on its own. I don’t remember which BBQ recipe book I originally got the recipe from but it’s evolved over the years into what’s below.

Let me know if you try it – enjoy!

Ingredients:

  • 1 large rack of baby-back pork ribs
  • 1 full bulb of garlic
  • 1/2 cup soy sauce
  • 1/4 cup cream sherry (e.g. Sheffield)
  • 2 tablespoons honey

Prepare the ribs: wash the rack and cut it into individual ribs. Arrange them on their sides in a deep rectangular dish (e.g. Pyrex or Le Creuset) – laid out like ( ( ( ( (. Optional: discard weird-shaped or extra pieces of meat.

Prepare the sauce: pour the soy sauce into a Pyrex measuring jug. Pour in the sherry too. Pour the honey into the mixture; it’ll settle at the bottom of the jug. Peel the garlic (vary the amount based on your garlic-loving preference) and use a garlic press (I use this one) to get the ‘meat’ of each garlic clove into the sauce jug. Discard the skin of each clove.

Marinate the ribs: Mix up the sauce with a small fork to get the honey mixed in. Pour the sauce evenly over the ribs. Cover the rib dish with cling wrap and let it sit in the fridge for 4-24 hours.

Prepare to cook: set the oven to broil and set the top oven shelf to be one level down from the very top. Line a deep rectangular baking/broiling pan with aluminum foil. Arrange the ribs on their sides on the foil in two rows like ( ( ( ( (.

Broil: broil the ribs as prepared for 5 minutes. Remove from the oven and turn the ribs over in place. Drizzle the remains of the sauce and garlic over each rib using a small spoon. Broil for 4 more minutes. Remove from the oven.

Bake: after broiling the ribs the second time, set the same oven to bake at 325F. Turn the ribs a second time and drizzle any remaining sauce on them. Bake on the bottom shelf for 20 minutes. Increase heat to 350F and bake for 10 more minutes. Increase heat to 375F and bake for 10 more minutes. You want the outside of the ribs to be blackening and caramelizing.

Remove and place in a serving dish. Eat immediately with your hands.

For garlic lovers, get a teaspoon and scoop the concentrated garlic mush and sauce from the bottom of the baking pan – yummmmm!

Extra ribs can be eaten cold over the next few days or reheated by baking for 5-8 minutes at 375F.

Delayed Durability in SQL Server 2014

One of the cool new features in SQL Server 2014 is delayed durability (available in all Editions), which is described in detail in Books Online here.

I think I’m going to see a lot of people turn this on, as you can get a profound increase in transaction throughput with the right workload. However, I also think a lot of people are going to turn this on without realizing the potential for data loss and making the appropriate trade off.

Why can it give a throughput boost?

I put together a contrived workload with a small table where 50 concurrent clients are updating the same rows, and the database log is on a slow I/O subsystem. Here’s a graph showing my test:

[Chart: Transactions/sec and Log Flushes/sec for the test workload, showing a large jump in Transactions/sec when delayed durability is forced]

The obvious change point is where I enabled delayed durability, with all transactions forced to use it. Before the change, the number of Transactions/sec is equal to the number of Log Flushes/sec, as each transaction holds locks that block all other transactions (I told you it’s a contrived workload). So why the profound jump in Transactions/sec when I forced delayed durability?
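
For reference, forcing it at the database level is a single statement (the database name below is a placeholder):

-- Force every transaction in this database to be delayed durable
-- (MyDatabase is a hypothetical name)
ALTER DATABASE [MyDatabase] SET DELAYED_DURABILITY = FORCED;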

Under normal circumstances, when a transaction commits, the commit doesn’t complete until the log block (see this blog post for more details) containing the LOP_COMMIT_TRAN log record for the transaction has been flushed to disk and the write is acknowledged back to SQL Server as having completed, providing the durability of the transaction (the D in the ACID properties of the transaction). The transaction’s locks cannot be dropped until the log flush completes.

In my workload, all the other transactions are waiting for the one that is committing, as they all need the same locks, so Transactions/sec is tied to Log Flushes/sec in this case.

With delayed durability, the transaction commit proceeds without the log block flush occurring – hence the act of making the transaction durable is delayed. Under delayed durability, log blocks are only flushed to disk when they reach their maximum size of 60KB. This means that transactions commit a lot faster, hold their locks for less time, and so Transactions/sec increases greatly (for this workload). You can also see that the Log Flushes/sec decreased greatly as well, as previously it was flushing lots of tiny log blocks and then changed to only flush maximum-sized log blocks.

Note:

  • I was forcing all transactions to be delayed durable, but the facility exists to make the delayed durability choice per transaction too (see Books Online for more details).
  • There’s a bit more to the log block flushing too: under delayed durability, a log block will flush when it fills up, when a non-delayed durable transaction commits, or when the new sp_flush_log proc is executed. Both mechanisms are sketched below.
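
As a quick sketch of both of those mechanisms (again, the database name is a placeholder):

-- Allow, but don't force, delayed durability in the database
ALTER DATABASE [MyDatabase] SET DELAYED_DURABILITY = ALLOWED;
GO

-- Opt in for a single transaction at commit time
BEGIN TRANSACTION;
-- ... data modifications here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
GO

-- Flush all outstanding log blocks to disk on demand
EXEC sys.sp_flush_log;
GO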

My good friend Aaron Bertrand over at SQL Sentry has a long post about delayed durability that looks into its performance implications in a little bit more depth so I recommend you check out his post as well.

So this looks great, for the right type of workload. But I bet you’re thinking:

What’s the catch?

Your transactions aren’t durable when they commit. Simple.

Now you may be thinking that if the system crashes, the most you’re going to lose is up to 60KB of transaction log. Wrong. If that last log block contains the LOP_COMMIT_TRAN log record for a long-running transaction, and that log block isn’t on disk when the system crashes, the whole transaction will roll back during crash recovery. So the potential for work/data loss is greater than just 60KB.

And there’s more:

  • Log backups will not back up that unflushed log block, as it’s not on disk, so non-durable committed transactions may not be contained within a log backup.
  • Non-durable transactions that have committed are not protected by synchronous database mirroring or a synchronous availability group either, as these rely on log block flushes (and transmission to the mirror/replica).

For critical transactions, an sp_flush_log can be used, or per-transaction delayed durability can be used instead (so that critical transactions remain fully durable).

So the million-dollar question is:

Should I enable delayed durability?

It depends. Is your business comfortable making the throughput vs. durability trade off? Does enabling it give a throughput boost? If yes to both, go ahead. If no to either, don’t enable it. That’s a very simplistic way of thinking about it, but that’s what it boils down to really.

There are lots of other things you can do to increase the throughput and performance of the transaction log, and I explained them in a blog post series.

As I stated above though, I think a lot of people are going to be wowed by the throughput boost (if their workload benefits) from delayed durability and see this as a no-brainer, without considering the potential for data loss.

Tempting, isn’t it?

Bug: Error: 3449 and server restart during DBCC CHECKDB

This is a quick post to let you know about a bug that a few people are hitting when running DBCC CHECKDB. The symptoms are a series of errors in the error log plus SQL Server forcibly shuts itself down and restarts. I’ve heard of people hitting the bug on SQL Server 2014 and SQL Server 2012 SP1. Microsoft has confirmed to me that it’s a bug and they’re “investigating a fix for the issue.”

The error log symptoms look something like I show below:

2014-11-18 09:04:15.69 spid64      The operating system returned error 665(The requested operation could not be completed due to a file system limitation) to SQL Server during a write at offset 0x00001EE6FD8000 in file 'C:\SQLskills\\Company_file2.ndf:MSSQL_DBCC23'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
2014-11-18 09:04:15.72 spid64      Error: 3314, Severity: 17, State: 3.
2014-11-18 09:04:15.72 spid64      During undoing of a logged operation in database 'Company', an error occurred at log record ID (887654:3321:14). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.
2014-11-18 09:04:34.38 spid64      Error: 831, Severity: 20, State: 1.
2014-11-18 09:04:34.38 spid64      Unable to deallocate a kept page.
2014-11-18 09:04:34.40 spid64      Error: 3449, Severity: 21, State: 1.
2014-11-18 09:04:34.40 spid64      SQL Server must shut down in order to recover a database (database ID 23). The database is either a user database that could not be shut down or a system database. Restart SQL Server. If the database fails to recover after another startup, repair or restore the database.

My speculation: the 665 error is from the snapshot file that DBCC CHECKDB creates hitting an NTFS limitation on the number of file fragments in a sparse file. This causes the snapshot creation to fail. The failure causes the undo of a log record in the snapshot to fail (remember that a database snapshot undergoes crash recovery to make it transactionally consistent). This failure then leads to SQL Server thinking it has to forcibly restart to recover the snapshot database, which it should never do for a snapshot – that seems to be the bug IMHO.

I’ll update this post as soon as I hear about a fix and the builds that the fix is in.

I think this is a rare bug to hit, and it seems to be a regression (judging from the builds people are reporting). You can help avoid it by:

  • Creating your own database snapshot, on a volume without file-system free space fragmentation, and running DBCC CHECKDB against the snapshot (see the sketch after this list)
  • OR, trying to run DBCC CHECKDB when there isn’t a significant amount of change occurring in the database, so the database snapshot doesn’t have to become very large
  • OR, using the backup-copy-restore-check method of running DBCC CHECKDB on a restored backup of the database on another server
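
For the first option, here’s a minimal sketch. All the names and the path are placeholders, and note that the snapshot needs a corresponding entry for every data file in the source database:

-- Create a snapshot on a volume of your choosing
-- (Company, Company_Data, and the path are hypothetical names)
CREATE DATABASE [Company_Snapshot]
ON (NAME = N'Company_Data', FILENAME = N'D:\Snapshots\Company_Data.ss')
AS SNAPSHOT OF [Company];
GO

-- Check the snapshot instead of the source database
DBCC CHECKDB (N'Company_Snapshot') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

-- Clean up
DROP DATABASE [Company_Snapshot];
GO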

If this bug has happened to you, please leave a comment with the build number you’re using and I’ll pass on the information to Microsoft.

Thanks

Calling all user group leaders! We want to present for you in 2015!

Starting in 2015 we at SQLskills would love to present remotely for your user group, anywhere in the world. Kimberly and I recently did a series of remote presentations from Redmond, WA, US to Australia and New Zealand, including two-way video in some cases, and the bandwidth worked great. It’s not feasible for us to travel to user groups or SQL Saturdays unless we’re already in that particular city, but remote presentations are easy to do and are becoming more and more popular. This way we can spread the community love around user groups everywhere that we wouldn’t usually get to in person.

All SQL Server user groups can get Lync accounts for free and that’s what we used for our presentations. If you have some other mechanism you usually use, we can accommodate that too.

Calling all user group leaders! If you’d like one of us (me, Kimberly, Jon, Erin, Glenn) to present remotely for you in 2015 (or maybe even multiple times), send me an email with:

  • Details of which user group you represent
  • The usual day of the month, time, and timezone of the user group
  • Which months you have available, starting in January 2015

And I’ll let you know who’s available with what topics so you can pick. We’ll also send you some swag to give out to attendees.

What’s the catch? There is no catch. We’re just stepping up our community involvement next year and we all love presenting :-)

And we’re serious about anywhere in the world – doesn’t matter what the timezone difference is – we’ll make it work.

Edit on November 25th: FYI – it’s filling up fast as we’ve already heard from more than 45 user groups! The second Tuesday of the month is taken for the whole year.

We’re really looking forward to engaging with you all!

Cheers

PS By all means pass the word on to any SharePoint and .Net user group leaders you know too.

Capturing IO latencies for a period of time

In both my wait statistics pre-conference workshops at the PASS Summit and SQLintersection I promised to do a bunch of blog posts. The second one on the list is a simple script to allow you to capture all the reads, writes, and I/O latencies that occurred over a period of time.

The script does the following:

  • Creates two temporary tables
  • Captures the output from sys.dm_io_virtual_file_stats into the first table
  • Waits for a configurable delay (the WAITFOR DELAY statement in the script – I made it 30 minutes in the example)
  • Captures the output from sys.dm_io_virtual_file_stats into the second table
  • Provides my usual virtual file stats output on the results

The original code and explanation about using the DMV are here.

Enjoy!

/*============================================================================
  File:     ShortPeriodIOLatencies.sql

  Summary:  Short snapshot of I/O latencies

  SQL Server Versions: 2005 onwards
------------------------------------------------------------------------------
  Written by Paul S. Randal, SQLskills.com

  (c) 2014, SQLskills.com. All rights reserved.

  For more scripts and sample code, check out http://www.SQLskills.com

  You may alter this code for your own *non-commercial* purposes (e.g. not
  in a for-sale commercial tool). Use in your own environment is encouraged.
  You may republish altered code as long as you include this copyright and
  give due credit, but you must obtain prior permission before blogging
  this code.

  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
============================================================================*/

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] = N'##SQLskillsStats1')
    DROP TABLE [##SQLskillsStats1];

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] = N'##SQLskillsStats2')
    DROP TABLE [##SQLskillsStats2];
GO

SELECT [database_id], [file_id], [num_of_reads], [io_stall_read_ms],
	   [num_of_writes], [io_stall_write_ms], [io_stall],
	   [num_of_bytes_read], [num_of_bytes_written], [file_handle]
INTO ##SQLskillsStats1
FROM sys.dm_io_virtual_file_stats (NULL, NULL);
GO

WAITFOR DELAY '00:30:00';
GO

SELECT [database_id], [file_id], [num_of_reads], [io_stall_read_ms],
	   [num_of_writes], [io_stall_write_ms], [io_stall],
	   [num_of_bytes_read], [num_of_bytes_written], [file_handle]
INTO ##SQLskillsStats2
FROM sys.dm_io_virtual_file_stats (NULL, NULL);
GO

WITH [DiffLatencies] AS
(SELECT
-- Files that weren't in the first snapshot
        [ts2].[database_id],
        [ts2].[file_id],
        [ts2].[num_of_reads],
        [ts2].[io_stall_read_ms],
		[ts2].[num_of_writes],
		[ts2].[io_stall_write_ms],
		[ts2].[io_stall],
		[ts2].[num_of_bytes_read],
		[ts2].[num_of_bytes_written]
    FROM [##SQLskillsStats2] AS [ts2]
    LEFT OUTER JOIN [##SQLskillsStats1] AS [ts1]
        ON [ts2].[file_handle] = [ts1].[file_handle]
    WHERE [ts1].[file_handle] IS NULL
UNION
SELECT
-- Diff of latencies in both snapshots
        [ts2].[database_id],
        [ts2].[file_id],
        [ts2].[num_of_reads] - [ts1].[num_of_reads] AS [num_of_reads],
        [ts2].[io_stall_read_ms] - [ts1].[io_stall_read_ms] AS [io_stall_read_ms],
		[ts2].[num_of_writes] - [ts1].[num_of_writes] AS [num_of_writes],
		[ts2].[io_stall_write_ms] - [ts1].[io_stall_write_ms] AS [io_stall_write_ms],
		[ts2].[io_stall] - [ts1].[io_stall] AS [io_stall],
		[ts2].[num_of_bytes_read] - [ts1].[num_of_bytes_read] AS [num_of_bytes_read],
		[ts2].[num_of_bytes_written] - [ts1].[num_of_bytes_written] AS [num_of_bytes_written]
    FROM [##SQLskillsStats2] AS [ts2]
    LEFT OUTER JOIN [##SQLskillsStats1] AS [ts1]
        ON [ts2].[file_handle] = [ts1].[file_handle]
    WHERE [ts1].[file_handle] IS NOT NULL)
SELECT
	DB_NAME ([vfs].[database_id]) AS [DB],
	LEFT ([mf].[physical_name], 2) AS [Drive],
	[mf].[type_desc],
	[num_of_reads] AS [Reads],
	[num_of_writes] AS [Writes],
	[ReadLatency(ms)] =
		CASE WHEN [num_of_reads] = 0
			THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,
	[WriteLatency(ms)] =
		CASE WHEN [num_of_writes] = 0
			THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,
	/*[Latency] =
		CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
			THEN 0 ELSE ([io_stall] / ([num_of_reads] + [num_of_writes])) END,*/
	[AvgBPerRead] =
		CASE WHEN [num_of_reads] = 0
			THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,
	[AvgBPerWrite] =
		CASE WHEN [num_of_writes] = 0
			THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,
	/*[AvgBPerTransfer] =
		CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
			THEN 0 ELSE
				(([num_of_bytes_read] + [num_of_bytes_written]) /
				([num_of_reads] + [num_of_writes])) END,*/
	[mf].[physical_name]
FROM [DiffLatencies] AS [vfs]
JOIN sys.master_files AS [mf]
	ON [vfs].[database_id] = [mf].[database_id]
	AND [vfs].[file_id] = [mf].[file_id]
-- ORDER BY [ReadLatency(ms)] DESC
ORDER BY [WriteLatency(ms)] DESC;
GO

-- Cleanup
IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] = N'##SQLskillsStats1')
    DROP TABLE [##SQLskillsStats1];

IF EXISTS (SELECT * FROM [tempdb].[sys].[objects]
    WHERE [name] = N'##SQLskillsStats2')
    DROP TABLE [##SQLskillsStats2];
GO

Survey: how much server memory on your largest machine?

[Edit 12/10/14: the survey is closed – see the editorial here.]

One of the trends I’m seeing this year is a slow increase in the number of people who have servers with very large amounts of memory – so time for a survey!

I’d like to know how much memory is installed on your largest server that runs SQL Server (not what max server memory is set to, but how much physical memory is installed).
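
If you’re not sure, here’s a quick way to check what SQL Server sees (the physical_memory_kb column exists on SQL Server 2012 and later; 2008/2008 R2 expose physical_memory_in_bytes instead):

-- Physical memory visible to SQL Server (2012+ column name)
SELECT [physical_memory_kb] / 1024 / 1024 AS [Physical Memory (GB)]
FROM sys.dm_os_sys_info;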


I’ll editorialize the results in a week or two.

Thanks!

PS If you can’t see the vote button, it’s a browser issue so please try a different one or just hit ‘Enter’ after making a selection and it should register your vote.