Windows Disk Cleanup

One very useful tool for freeing up disk space is the built-in Disk Cleanup application in Windows. You can start it by simply typing Disk Cleanup into the Windows Start menu search box. Once it starts, you will need to pick which drive you want to scan and clean up. In most cases, you will want to choose your boot drive, which is C: for most people.



Figure 1: Disk Cleanup: Drive Selection Screen



Figure 2: Initial Disk Cleanup Screen


You should always click on the “Clean up system files” button to get a complete picture (and find the larger files that you may be able to delete). After clicking on the button, you’ll have to wait for the scan to complete. If you have a magnetic drive, this might take a while; if you have an Intel Optane 900p drive, it will be extremely quick!



Figure 3: Disk Cleanup after clicking on “Clean up system files”

In many cases, you will find an entry for “Previous Windows installation(s)” that may be 25-50GB in size, which is pretty significant when you have a smaller boot drive and are running low on disk space. In Windows 10, these are left behind when you install the semi-annual feature updates, such as the recent “Windows 10 April 2018” update.

You just need to be aware that if you let Disk Cleanup delete the previous Windows installations, you won’t be able to go back to a previous version by uninstalling the latest version. The tool will display a special confirmation dialog to make sure you know this.

If you choose to delete everything that the tool offers up, it is not uncommon to get 50-75GB of space back. Again, the disk cleanup process will take quite a while with a magnetic drive.

You can start Disk Cleanup from a command line, as Microsoft explains in its documentation.
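
For example, here is a minimal sketch using the documented /sageset and /sagerun switches for cleanmgr.exe (the selection set number 1 is just an arbitrary slot you pick):

    # Run from an elevated PowerShell window.
    # /sageset:1 opens the cleanup category checklist and saves your choices as selection set 1.
    cleanmgr.exe /sageset:1

    # /sagerun:1 then runs Disk Cleanup unattended, using saved selection set 1.
    cleanmgr.exe /sagerun:1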


You can also use the Windows Storage Sense feature to automatically do some of this disk cleanup on all of your drives.



Figure 4: Storage Sense Display



Figure 5: Storage Sense Configuration

Initial CrystalDiskMark Results for Intel Optane 900p

I have been building a new desktop workstation based on an AMD Ryzen Threadripper 1950X processor (which I will be describing in much more detail in a subsequent blog post). I am planning on using one of the brand new 480GB Intel Optane SSD 900p PCIe cards as my boot drive. Initially, I installed Windows 10 Professional, Version 1709 on a pretty lackluster OEM 256GB Toshiba M.2 NVMe drive that I had lying around. My plan is to clone that drive to the Intel Optane 900p.

I also have a couple of 1TB Samsung 960 PRO M.2 NVMe cards in this machine, so I thought I would run a couple of quick CrystalDiskMark tests on the two drives. One thing to keep in mind is that CrystalDiskMark is not the best synthetic benchmark to use to show off the strengths of the Optane 900p.

Traditional NAND-based SSDs excel at very high queue depths that are not usually encountered outside of synthetic benchmarks (especially for random read performance). Optane 900p SSDs perform extremely well for random reads at low queue depths. This gives you outstanding responsiveness and performance where it is going to be most noticeable in daily usage.

You can see part of this effect in the bottom row of CDM test results for reads, where the Optane 900p is doing about 4.3X more 4K IOPS than the Samsung 960 PRO at a queue depth of 1. A better test for this would be Microsoft DiskSpd, which can also measure latency during the test run.
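
For example, a rough DiskSpd sketch of a queue depth 1 random read test with latency reporting might look like this (the test file path and size are placeholder values, not what I actually used):

    # 4KB random reads (-b4K -r), queue depth 1 per thread (-o1), a single thread (-t1), 0% writes (-w0),
    # a 60 second duration (-d60), latency statistics (-L), and OS software caching and hardware write
    # caching disabled (-Sh). The 25GB test file (-c25G) at C:\temp\testfile.dat is just a placeholder.
    diskspd.exe -c25G -d60 -b4K -r -o1 -t1 -w0 -L -Sh C:\temp\testfile.dat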

Here are some of the primary advantages of the Intel Optane 900p compared to current NAND flash storage.

 

    • High random read and write performance
    • High performance at low queue depths
    • High simultaneous read and write performance
    • High read and write performance at small capacity points
    • High performance maintained as the drive fills with data
    • Higher endurance than current NAND technology

 


Figure 1: 1TB Samsung 960 PRO with Samsung NVMe driver

 


Figure 2: 480GB Intel Optane 900p with Intel NVMe driver

 

SQLskills SQL101: Sequential Throughput

Over the past couple of months, SQLskills has embarked on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

The Importance of Sequential Throughput for SQL Server

A number of very common, important SQL Server operations are potentially limited by the sequential throughput of the underlying storage subsystem. These include:

  1. Full database backups and restores
  2. Index creation and maintenance work
  3. Initializing transactional replication snapshots and subscriptions
  4. Initializing AlwaysOn AG replicas
  5. Initializing database mirrors
  6. Initializing log shipping secondaries
  7. Running DBCC CHECKDB
  8. Relational data warehouse query workloads
  9. Relational data warehouse ETL operations

Despite this, I often see DBAs having to contend with extremely low sequential performance on their various database servers, to the detriment of their ability to meet their SLAs for things like RPO and RTO (not to mention their sanity). This being the case, what, if anything, can you do to improve this situation?

One thing you should do is some storage subsystem benchmarking with tools like CrystalDiskMark and Microsoft DiskSpd, to find out the potential performance of each logical drive on the machine where your SQL Server instance is running.
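
For example, a hedged DiskSpd sketch of a large block sequential read test might look like this (the file path, file size, thread count, and queue depth are placeholder values you would tune for your environment):

    # 512KB sequential reads (-b512K -si, interlocked offsets shared across threads), four threads (-t4),
    # eight outstanding I/Os per thread (-o8), 0% writes (-w0), a 60 second duration (-d60),
    # latency statistics (-L), and caching disabled (-Sh). The 50GB test file at D:\temp\testfile.dat is a placeholder.
    diskspd.exe -c50G -d60 -b512K -si -o8 -t4 -w0 -L -Sh D:\temp\testfile.dat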

You can also run some simple queries and tests from SQL Server itself to see what level of sequential performance you are actually getting from your storage subsystem (which is much harder for storage administrators, SAN administrators, and storage vendors to dispute). One example is running a full database backup to a NUL device, to see the ultimate sequential read performance from where your data and log files are located. Another example is running a SELECT query with an index hint to force the query to do a clustered index scan or table scan from a relatively large table.
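
Here is a minimal sketch of both of those tests run through sqlcmd (the database name YourDatabase and the table dbo.YourBigTable are placeholders, and -S . -E assumes a local default instance with Windows authentication):

    # A backup to the NUL device reads the data files at full speed but writes nothing, so the MB/sec figure
    # in the completion message reflects the sequential read performance of the drive(s) holding the data files.
    sqlcmd -S . -E -Q "BACKUP DATABASE [YourDatabase] TO DISK = 'NUL' WITH COPY_ONLY, STATS = 10;"

    # An index hint forcing a clustered index scan of a relatively large table (INDEX(1) assumes the table
    # has a clustered index), which drives large sequential reads against that database's data files.
    sqlcmd -S . -E -d YourDatabase -Q "SELECT COUNT(*) FROM dbo.YourBigTable WITH (INDEX(1));"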

Note: You should do these kinds of tests during a maintenance window or, ideally, before a new instance of SQL Server goes into Production. Otherwise, your testing could negatively affect your Production environment, or other Production activity could skew your test results.

Beyond that, here are some general steps you can take to improve overall storage system performance:

  1. Make sure you have power management configured correctly at all levels (BIOS power management, hypervisor power policy, and Windows Power Plan)
  2. Make sure you have Windows Instant File Initialization enabled (quick checks for items 1 and 2 are sketched after this list)
  3. Make sure you are not under memory pressure (to reduce stress on your storage subsystem)
  4. Make sure you are using the latest version of SQL Server
  5. Make sure you have installed the latest Service Pack and Cumulative Update for your version of SQL Server
  6. Favor Enterprise Edition over Standard Edition (because it has better I/O performance)
  7. Use compression to reduce your I/O requirements (backup compression, data compression, and columnstore indexes)
  8. Make sure your indexes are tuned appropriately (not too many and not too few)
  9. Keep your index fragmentation under control
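
Here is a hedged sketch of quick checks for the first two items above (it assumes a local default SQL Server instance, and the instant_file_initialization_enabled column only exists in reasonably recent SQL Server builds):

    # Show the active Windows power plan (for a dedicated SQL Server you generally want High performance, not Balanced).
    powercfg /getactivescheme

    # Check whether Instant File Initialization is enabled for the SQL Server service,
    # using sys.dm_server_services (assumes a local default instance and a build that exposes this column).
    sqlcmd -S . -E -Q "SELECT servicename, instant_file_initialization_enabled FROM sys.dm_server_services;"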

You can watch my Pluralsight course, SQL Server: Improving Storage Subsystem Performance, to get more details about this subject. You can also read my article on SQLPerformance.com, Sequential Throughput Speeds and Feeds, to get some more technical details about sequential throughput.