DBCC CHECKDB scalability and performance benchmarking on SSDs

Back in February I ran a bunch of performance tests of DBCC CHECKDB on SSDs, to evaluate the effects of degree of parallelism (DOP) and various options and trace flags, and now I’m finally getting around to presenting the results. Make sure to also read the recent post where I talk about the detrimental effect of computed-column indexes on […]

DBCC CHECKDB performance and computed-column indexes

It’s no secret that DBCC CHECKDB has some performance quirks based on the schema of the database being checked and various kinds of corruptions. I was recently doing some scalability testing of DBCC CHECKDB for a blog post and discovered quite a nasty performance issue that exists in all versions of SQL Server back to […]

Make life easier on yourself, get a baseline!

At the SQL Connections conference earlier this month, at the start of my talk on Making SQL Server Faster, Part 1: Simple Things, I talked about the importance of having a performance baseline so you can measure the effect of any changes made to your environment. A month ago I kicked off a survey about […]

Benchmarking: Multiple data files on SSDs (plus the latest Fusion-io driver)

It's been a long time since the last blog post on SSD benchmarking - I've been busy! I'm starting up my benchmarking activities again and hope to post more frequently. You can see the whole progression of benchmarking posts here. You can see my benchmarking hardware setup here, with the addition of the Fusion-io ioDrive Duo […]

Benchmarking: Introducing SSDs (Part 3: random inserts with wait stats details)

Last time I posted about SSDs, I presented the findings from sequential inserts with a variety of configurations and basically concluded that SSDs do not provide a substantial gain over SCSI storage (that is not overloaded) – see this blog post for more details. You can see my benchmarking hardware setup here, with the addition of […]

Benchmarking: Introducing SSDs (Part 2: sequential inserts)

Over the last month we've been teaching in Europe and I haven't had much time to focus on benchmarking, but I've finally finished the first set of tests and analyzed the results. You can see my benchmarking hardware setup here, with the addition of the Fusion-io ioDrive Duo 640GB drives that Fusion-io were nice enough […]

Benchmarking: do multiple data files make a difference?

Many times I'm asked whether having multiple data files can lead to an improvement in performance. The answer, as with all things SQL (except concerning auto-shrink), is a big, fat "it depends." It depends on what you're using the database for, the layout of the files on the IO subsystem, and the IO subsystem […]

Benchmarking: Introducing SSDs (Part 1b: not overloaded log file array)

In the previous post in the series I introduced SSDs to the mix and examined the relative performance of storing a transaction log on an 8-drive 7.2k SATA array versus a 640-GB SSD configured as a 320-GB RAID-1 array. The transaction log was NOT the I/O bottleneck in the system and so my results showed […]

Benchmarking: Introducing SSDs (Part 1: not overloaded log file array)

Well it's been almost 6 weeks since my last benchmarking blog post as I got side-tracked with the Myth-a-Day series and doing real work for clients :-) In the benchmarking series I've been doing, I've been tuning the hardware we have here at home so I can start running some backup and other tests. In the last […]

Benchmarking: 1-TB table population (part 5: network optimization again)

Blog posts in this series: For the hardware setup I'm using, see this post. For an explanation of log growth and its effect on perf, see this post. For the baseline performance measurements for this benchmark, see this post. For the increasing performance through log file IO optimization, see this post. For the increasing performance through separation […]

Benchmarking: 1-TB table population (part 4: network optimization)

Blog posts in this series: For the hardware setup I'm using, see this post. For an explanation of log growth and its effect on perf, see this post. For the baseline performance measurements for this benchmark, see this post. For the increasing performance through log file IO optimization, see this post. For the increasing performance through separation […]

New hardware to play with: Fusion-io SSDs

Christmas comes but once a year… really? Then mine just came early on this afternoon's UPS truck. The very nice folks at Fusion-io just sent me two of their fully-loaded top-of-the-line ioDrive Duos with 640GB of solid-state flash memory in each. This is really extra-nice of them because on Dell's Small Business website they're currently […]

Benchmarking: 1-TB table population (part 3: separating data and log files)

Blog posts in this series: For the hardware setup I'm using, see this post. For the baseline performance measurements for this benchmark, see this post. For the increasing performance through log file IO optimization, see this post. In the previous post in the series, I optimized the log block size to get better throughput on the transaction […]

Benchmarking: 1-TB table population (part 2: optimizing log block IO size and how log IO works)

(For the hardware setup I'm using, see this post. For the baseline performance measurements for this benchmark, see this post.) In my previous post in the series, I described the benchmark I'm optimizing – populating a 1-TB clustered index as fast as possible using default values. I proved to you that I had an IO bottleneck […]

Benchmarking: 1-TB table population (part 1: the baseline)

(For the hardware setup I'm using, see this post.) As part of my new benchmarking series I first wanted to play around with different configurations of data files and backup files for a 1-TB database, to see what kind of performance gains I can get by invoking the parallelism possible when backing up and restoring the database. […]

Interesting case of watching log file growth during a perf test

I'm running some performance tests on the hardware we have (more details on the first of these tomorrow), and I was surprised to see some explosive transaction log growth while running in the SIMPLE recovery model with single-row insert transactions! Without spoiling tomorrow's thunder too much, I've got a setup with varying numbers of […]
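(If you want to watch log growth like this during your own tests, a minimal sketch is to poll DBCC SQLPERF (LOGSPACE) in a loop while the test runs – it reports log size in MB and percent used for every database on the instance. The 5-second interval below is just an arbitrary choice for illustration.)

```sql
-- Minimal log-growth watcher: sample log size/usage while a test runs.
-- DBCC SQLPERF (LOGSPACE) returns one row per database with
-- Log Size (MB) and Log Space Used (%).
WHILE 1 = 1
BEGIN
    DBCC SQLPERF (LOGSPACE);
    WAITFOR DELAY '00:00:05';  -- sample every 5 seconds
END;
```

Run it in a second connection and cancel it when the test finishes; a growing Log Size (MB) value for your test database shows the log file itself is growing, not just filling.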

Benchmarking hardware setup

It's been a few weeks since my last post, but I've got a bunch in the pipeline coming up. First, I've finally gotten things together to start using the hardware we got a while back. I'm going to be doing some benchmarking, perf testing, and playing with various HA technologies, and of course blogging a bunch about what […]