Over the past couple of months, SQLskills has embarked on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.
The Importance of Sequential Throughput for SQL Server
A number of common, important operations that SQL Server executes are potentially performance-limited by the sequential throughput of the underlying storage subsystem. These include:
- Full database backups and restores
- Index creation and maintenance work
- Initializing transactional replication snapshots and subscriptions
- Initializing AlwaysOn AG replicas
- Initializing database mirrors
- Initializing log-shipping secondaries
- Running DBCC CHECKDB
- Relational data warehouse query workloads
- Relational data warehouse ETL operations
Despite this, I often see DBAs having to contend with extremely low sequential performance on their database servers, to the detriment of their ability to meet their SLAs for things like RPO and RTO (not to mention their sanity). Given that, what, if anything, can you do to improve the situation?
One thing you should do is benchmark your storage subsystem with tools like CrystalDiskMark and Microsoft DiskSpd, to find out the potential performance of each logical drive on the machine where your SQL Server instance is running.
You can also run some simple queries and tests from SQL Server itself to see what level of sequential performance you are actually getting from your storage subsystem (which is much harder for storage administrators, SAN administrators, and storage vendors to dispute). One example is running a full database backup to a NUL device, to see the ultimate sequential read performance from where your data and log files are located. Another example is running a SELECT query with an index hint to force the query to do a clustered index scan or table scan from a relatively large table.
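For example, here is a minimal T-SQL sketch of both tests (YourDatabase and dbo.YourLargeTable are hypothetical placeholder names):

```sql
-- Read the entire database and discard the output, which shows the maximum
-- sequential read performance of the drive(s) where the data files live.
-- Note the MB/sec figure reported in the completion message.
BACKUP DATABASE YourDatabase
TO DISK = N'NUL'
WITH COPY_ONLY, NO_COMPRESSION;

-- On a non-production system, clear the buffer pool first so the scan below
-- actually reads from storage instead of from memory.
DBCC DROPCLEANBUFFERS;

-- Force a scan of a relatively large table with an index hint.
-- The INDEX(1) hint scans the clustered index (use INDEX(0) on a heap).
SELECT COUNT_BIG(*)
FROM dbo.YourLargeTable WITH (INDEX(1));
```

While either test runs, you can watch the actual throughput with Performance Monitor or sys.dm_io_virtual_file_stats.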
Note: You should do these kinds of tests during a maintenance window or, ideally, before a new instance of SQL Server goes into Production. Otherwise, your testing could negatively affect your Production environment, or other Production activity could skew your test results.
Beyond that, here are some general steps you can take to improve overall storage system performance:
- Make sure you have power management configured correctly at all levels (BIOS power management, hypervisor power policy, and Windows Power Plan)
- Make sure you have Windows Instant File Initialization enabled (there is a quick check for this after this list)
- Make sure you are not under memory pressure (to reduce stress on your storage subsystem)
- Make sure you are using the latest version of SQL Server
- Make sure you have installed the latest Service Pack and Cumulative Update for your version of SQL Server
- Favor Enterprise Edition over Standard Edition (because it has better I/O performance)
- Use compression to reduce your I/O requirements (backup compression, data compression, and columnstore indexes)
- Make sure your indexes are tuned appropriately (not too many and not too few)
- Keep your index fragmentation under control
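As a quick sketch of two of these items, here is how you might verify Instant File Initialization and use compression (the instant_file_initialization_enabled column requires SQL Server 2016 SP1 or later; the database, path, and index names are hypothetical):

```sql
-- Check whether Instant File Initialization is enabled for the Database Engine service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- Reduce backup I/O and duration with backup compression
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase.bak'
WITH COMPRESSION, CHECKSUM;

-- Reduce data file I/O with data compression on a hypothetical index
-- (PAGE compression trades some CPU for fewer pages read from storage)
ALTER INDEX PK_YourLargeTable ON dbo.YourLargeTable
REBUILD WITH (DATA_COMPRESSION = PAGE);
```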
You can watch my Pluralsight course, SQL Server: Improving Storage Subsystem Performance, for more details about this subject. You can also read my article on SQLPerformance.com, Sequential Throughput Speeds and Feeds, for more technical details about sequential throughput.
One quick way to view the actual I/O performance of your production workloads is to examine the history of how long your backups have been taking. This is very simple with the dbatools.io PowerShell cmdlet Measure-DbaBackupThroughput.
You can also query msdb.dbo.backupset to figure out the size and elapsed time of your database backups.
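Here is a minimal sketch of such a query against the backup history in msdb:

```sql
-- Estimate backup throughput from backup history
-- (backup_size is in bytes; NULLIF avoids divide-by-zero for sub-second backups)
SELECT TOP (20)
    bs.database_name,
    bs.backup_start_date,
    bs.backup_size / 1048576.0 AS backup_size_mb,
    DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
    bs.backup_size / 1048576.0
        / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) AS mb_per_sec
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'  -- 'D' = full database backup
ORDER BY bs.backup_start_date DESC;
```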