<h1>Benchmarking: 1-TB table population (part 3: separating data and log files)</h1>
<p>By Paul Randal | February 15, 2010</p>
<p>Blog posts in this series:</p>
<ul>
<li>For the hardware setup I'm using, see <a href="https://www.sqlskills.com/blogs/paul/benchmarking-hardware-setup/">this post</a>.</li>
<li>For the baseline performance measurements for this benchmark, see <a href="https://www.sqlskills.com/blogs/paul/benchmarking-1-tb-table-population-part-1-the-baseline/">this post</a>.</li>
<li>For increasing performance through log file I/O optimization, see <a href="https://www.sqlskills.com/blogs/paul/benchmarking-1-tb-table-population-part-2-optimizing-log-block-io-size-and-how-log-io-works/">this post</a>.</li>
</ul>
<p>In the <a href="https://www.sqlskills.com/blogs/paul/benchmarking-1-tb-table-population-part-2-optimizing-log-block-io-size-and-how-log-io-works/">previous post</a> in the series, I optimized the log block size to get better throughput on the transaction log, but it was very obvious that having the log file and the data file on the same RAID array is a bottleneck.</p>
<p>Over the last couple of weeks I've been running some tests with the log and data files on separate RAID arrays, and this post explains what I've found.</p>
<p>The first thing I did was set up the systems for remote access so I can log in to them from anywhere in the world. This involved:</p>
<ul>
<li>Changing the TCP/IP port that Remote Desktop listens on (<a href="https://support.microsoft.com/kb/306759">KB 306759</a>)</li>
<li>Allowing that port through Windows Firewall (having to perform this step wasn't obvious)</li>
<li>Turning on port forwarding on our Internet-facing router for that port to the right server</li>
</ul>
<p>It's very cool being able to play with these servers from anywhere &#8211; and to show live servers-under-load during a class, as I did last week.</p>
<p>The previous two tests were performed using a single 2TB RAID-10 array comprised of 8 300GB 15k SCSI drives. The new array I added to the mix for this test is comprised of varying numbers of 1TB 7.2k SATA drives. I tested:</p>
<ul>
<li>4 drives in a RAID-10 configuration, giving 2TB usable</li>
<li>6 drives in a RAID-10 configuration, giving 2TB usable</li>
<li>8 drives in a RAID-10 configuration, giving 2TB usable</li>
</ul>
<p>The volume size limit that the Dell MD3000i arrays allow is 2TB, so the 2nd and 3rd configurations described above made use of extra spindles but with a lot of wasted space. In this set of tests I limited myself to a single data file; in upcoming tests I'll try multiple data files, which will enable more efficient use of the available raw disk capacity.</p>
<p>I used the same scripts as in the previous tests, with 128 concurrent connections each inserting 4.1KB rows in batches of 10 inserts per transaction into the same table, for a total of 1/128 TB per connection. The data file is pre-sized to 1TB. The log file is pre-sized to 250MB, with a 50MB auto-growth increment. These are invariants from the previous tests (and are also obvious things to change for better performance, but that's not what this set of tests was about).</p>
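<p>To get a feel for the scale of that workload, here's a quick back-of-the-envelope calculation. This is just a sketch assuming decimal units (1TB = 10^12 bytes, 4.1KB = 4,100 bytes); the exact counts depend on the units the scripts use:</p>

```python
# Rough sizing of the insert workload: 128 connections, 4.1KB rows,
# 10 inserts per transaction, 1/128 TB per connection.
TB = 10**12          # assuming decimal terabytes
ROW_BYTES = 4_100    # assuming decimal kilobytes
CONNECTIONS = 128
ROWS_PER_TXN = 10

bytes_per_conn = TB // CONNECTIONS
rows_per_conn = bytes_per_conn // ROW_BYTES
txns_per_conn = rows_per_conn // ROWS_PER_TXN

print(rows_per_conn)                # ~1.9 million rows per connection
print(txns_per_conn)                # ~190 thousand transactions per connection
print(rows_per_conn * CONNECTIONS)  # ~244 million rows in total
```

<p>So each connection issues roughly 190,000 small transactions, which is why the log flush rate and log device latency matter so much in this benchmark.</p>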
<p>I tried the following configurations:</p>
<ul>
<li>Log on the SCSI RAID-10, data on the 4-, 6-, and 8-drive SATA RAID-10</li>
<li>Data on the SCSI RAID-10, log on the 4-, 6-, and 8-drive SATA RAID-10</li>
</ul>
<p>And very interesting results they are too! In the previous set of tests, the best performance I could get was 21167 seconds for test completion, with the data and log files sharing the SCSI RAID-10 array.</p>
<p><strong>Moving the log to a different array</strong></p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/3logelapsedtime.jpg" alt="" width="485" height="273" /><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/3logthroughput.jpg" alt="" width="485" height="274" /></p>
<p>The left-hand graph above shows that moving the log file off got the time down to 20842 seconds in the best case, but that's using 8 drives. The 6- and 8-drive times were basically the same &#8211; which shows that 4 drives didn't provide enough I/O parallelism for the load SQL Server was pushing through the array, while 6 drives provided essentially enough that 8 drives didn't lead to a big further gain.</p>
<p>Overall, I'd say there was no real performance gain from moving the log file to a different array.</p>
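<p>To put a number on "no real gain", here's the percentage change in elapsed time, using the figures quoted above:</p>

```python
# Percentage change in elapsed time relative to the part-2 result,
# where data and log shared the SCSI RAID-10 array.
def pct_change(baseline_secs: float, new_secs: float) -> float:
    """Positive = faster than the baseline, negative = slower."""
    return (baseline_secs - new_secs) / baseline_secs * 100

baseline = 21_167   # best result from the previous post (shared array)
log_moved = 20_842  # best result with the log moved to the 8-drive SATA array

print(round(pct_change(baseline, log_moved), 1))  # 1.5, i.e. only ~1.5% faster
```

<p>A 1.5% improvement, at the cost of 8 extra drives, really isn't worth it for this workload.</p>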
<p>As far as log disk queue lengths go, when the log was on the 4-drive array, the average log write disk queue length was in the 20s. For the 6- and 8-drive cases, the queue length dropped to low single digits. It didn't drop right down to around zero because the perfmon counter measures the queue length as far as Windows sees it &#8211; and the iSCSI traffic was being bottlenecked through a single NIC, as we'll see later.</p>
<p>For data disk queue lengths, when the log was on the 4-drive array, the average data write queue length was around 5, with spikes to 20+. For the 6- and 8-drive cases, the queue length increased to an average of 10-15, with spikes to 30+. Clearly the log drive bottleneck in the 4-drive case was lowering the overall transaction throughput, which reduced the load on the data drives.</p>
<p>In terms of iSCSI performance, I had the MD3000is take a 2-hour snapshot of each array's performance with the varying size of the log array. The results are in the right-hand graph above. You can see that the throughput of the data array remains basically static, but the log array throughput increases with more drives thrown into the mix. Nothing stunning here.</p>
<p>In the 4-drive case, the transaction log grew to 5.5GB, but in the 6- and 8-drive cases the log grew to over 8GB &#8211; which is what I'd expect given the higher transaction throughput.</p>
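<p>Given the log's 250MB initial size and 50MB auto-growth increment, growing to these final sizes means a lot of auto-growth events, and each one stalls log writes while the new portion is zero-initialized (the log can't use instant file initialization). A rough count, assuming binary GB (1GB = 1,024MB) and a fixed 50MB increment throughout:</p>

```python
import math

# Number of 50MB auto-growth events needed to take a 250MB log
# to a given final size (assumes the increment never changes).
def autogrow_events(final_mb: int, initial_mb: int = 250, increment_mb: int = 50) -> int:
    return math.ceil((final_mb - initial_mb) / increment_mb)

print(autogrow_events(int(5.5 * 1024)))  # 4-drive case, log grew to 5.5GB: 108 growths
print(autogrow_events(8 * 1024))         # 6/8-drive cases, log grew to ~8GB: 159 growths
```

<p>That's a lot of interruptions to the workload, which is one reason a larger pre-sized log and growth increment are obvious future optimizations.</p>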
<p><strong>Moving the data to a different array</strong></p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/3dataelapsedtime.jpg" alt="" width="485" height="273" /><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/3datathroughput.jpg" alt="" width="485" height="273" /></p>
<p>Edit 02/28/10: When performing the tests for the next post in the series, I found the results from this test didn't make sense. I went back to re-run the single-NIC tests and found what I feared &#8211; the original tests didn't complete properly. The corrected results are below.</p>
<p>The left-hand graph above shows that moving the data file off got the time down to 25400 seconds in the best case, but that's using 8 drives. The data file's throughput requirements are clearly higher than the log file's (for this specific benchmark test, not as a general statement by *any means*). It's clear that having the data file on the slower SATA array wasn't going to lead to any better performance than having it on the SCSI array.</p>
<p>As far as log disk queue lengths go, they were between 1-3 in all cases, as the 8-drive SCSI array could clearly provide enough I/O throughput to satisfy the log's needs.</p>
<p>For data disk queue lengths, when the data was on the 4-drive array, the average data write queue length was around 30, with wild spikes. For the 6- and 8-drive cases, the queue length decreased to an average of 20, and then down to 10 with spikes to 30+. When there was no checkpoint or lazy writer activity, the write queue lengths dropped to zero, as I'd expect.</p>
<p>In terms of iSCSI performance, I had the MD3000is take a 2-hour snapshot of each array's performance with the varying size of the data array. The results are in the right-hand graph above. You can see that the throughput of the log array remains pretty static, but the data array throughput increases *dramatically* with more drives thrown into the mix. Again, nothing stunning here.</p>
<p>In the 4-drive case, the transaction log grew to 19GB, but in the 6- and 8-drive cases the log grew to around 8GB &#8211; again, as I'd expect. It seems that for this workload, with the default checkpoint settings and 8GB of server memory, the log needs to be around 8GB when there's adequate I/O throughput. This is something I'll be trying to address in future tests.</p>
<p><strong>Perfmon captures</strong></p>
<p>I took a few perfmon snapshots during the various tests to provide some details of the system performance. I'm not going to go into as much detail explaining the various counters and what they mean &#8211; I did that in the previous post &#8211; but I will point out interesting details. These aren't necessarily representative of what you'll see running similar tests; they're interesting times during the traces that I thought would be fun to look at and understand.</p>
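<p>One way to interpret the disk queue length numbers above, and in the perfmon captures that follow, is Little's law: the average number of outstanding I/Os equals the I/O arrival rate multiplied by the average I/O latency. More spindles service the same arrival rate at lower latency, which is why the queue lengths drop as drives are added. A minimal sketch with made-up illustrative numbers (none of these rates or latencies are measured values from this benchmark):</p>

```python
# Little's law: average queue depth = arrival rate (IOs/sec) x average latency (sec).
def avg_queue_depth(iops: float, avg_latency_secs: float) -> float:
    return iops * avg_latency_secs

# Hypothetical numbers, for illustration only:
print(avg_queue_depth(2_000, 0.010))  # 2000 writes/sec at 10ms latency -> depth ~20
print(avg_queue_depth(2_000, 0.002))  # same rate at 2ms latency       -> depth ~4
```

<p>The same relationship works in reverse: from a measured queue length and I/O rate you can back out the average latency the array is delivering.</p>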
<p><strong>1) Log file on the 6-drive SATA array</strong></p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/36driver10log.jpg" alt="" width="751" height="535" /></p>
<p>This graph highlights the <code>Avg. Disk Write Queue Length</code> for the K: data array (the black line). You can see that when <code>Checkpoint pages/sec</code> (pink line) and <code>Lazy writes/sec</code> (light green line) both drop to zero, the write queue length drops to zero &#8211; as there's no other activity on that RAID array. When either or both start up again, the write queue length spikes wildly. Other things to note:</p>
<ul>
<li>The <code>Avg. Disk Write Queue Length</code> for the I: log array (the green line at the bottom) is pretty static in the low single digits.</li>
<li>The <code>Log Bytes Flushed/sec</code> (the red line) tracks the <code>Disk Write Bytes/sec</code> for the I: log array, except during the heavy checkpoint activity at the start of the trace &#8211; this is a period of log file auto-growth.</li>
<li>The <code>Pages Allocated/sec</code> (the top light blue line) is fairly static, but varies wildly during the heavy checkpoint activity. This is because the log auto-growth is effectively stalling transaction throughput.</li>
</ul>
<p><strong>2) Log file on the 8-drive SATA array</strong></p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/38driver10log.jpg" alt="" width="751" height="481" /></p>
<p>This highlights <code>Pages Allocated/sec</code> (the black line), which remains basically static and is a measure of the overall transaction throughput. At this point in the test, the log has auto-grown as far as it will, and performance is stable. Other things to note:</p>
<ul>
<li><code>Log Bytes Flushed/sec</code> also remains static, as there's a constant transaction workload with no interruptions for log growth, and its spikes and troughs directly correlate with those in the <code>Pages Allocated/sec</code> counter.</li>
<li>The <code>Avg. Disk Write Queue Length</code> for the I: log array (the bottom green line) is static.</li>
</ul>
<p><strong>3) Data on the 4-drive SATA array</strong></p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/3dataon4drive72r10.jpg" alt="" width="750" height="473" /></p>
<p>This represents the worst combination of array sizes and file placements, and is a pretty chaotic trace.</p>
<p>It highlights the <code>Avg. Disk Write Queue Length</code> of the I: data array (the black line), and you can see that it varies wildly between 0 and 80(!), clearly showing that the data file performance is hampered by the under-powered RAID configuration. All the other perf counters around the data array vary wildly too &#8211; clearly not a recipe for high performance.</p>
<p><strong>Summary</strong></p>
<p>There's clearly a performance gain to be had from separating the data and log portions of the database in this case. However, doing so has highlighted the fact that the simple networking configuration I have is now a bottleneck.</p>
<p>Here's a Task Manager trace showing the network utilization during one of the tests:</p>
<p><img src="https://www.sqlskills.com/blogs/paul/wp-content/uploads/2010/2/31iscsinic.jpg" alt="" width="888" height="128" /></p>
<p>The troughs are the constant traffic going to the log array, and the sustained peaks are when there's checkpoint and/or lazy writer activity going to the data array as well.</p>
<p>In the next post, I'll tune the network configuration, and then I'll move on to trying multiple data files in multiple RAID arrays.</p>
<p>Hope you're still enjoying the series!</p>