Two New TPC-E Submissions for SQL Server 2012

Just when I was not looking, two new official TPC-E results have been posted in the last week. IBM has a 3218.46 TPC-E score for an IBM System x3850 X5 that has four Intel Xeon E7-4870 processors, while HP has an 1881.76 TPC-E score for an HP ProLiant DL380p Gen8 system with two Intel Xeon E5-2690 processors.

What is notable about this is that the 3218.46 score for a four-socket Xeon E7-4870 system is significantly higher than we have seen for similar four-socket Xeon E7-4870 systems in the past. An especially good comparison is between an IBM System x3850 X5 result that was submitted on June 27, 2011 and this latest result for an IBM System x3850 X5 that was submitted on November 28, 2012.  As you can see in Table 1, the newer submission has a 12.4% higher score than the older one, for the exact same model of server with the exact same number and model of processors.  The first big difference that jumps out is that the newer submission is running SQL Server 2012 Enterprise Edition on top of Windows Server 2012 Standard Edition, while the older submission is running SQL Server 2008 R2 Enterprise Edition on top of Windows Server 2008 R2 Enterprise Edition.

Date       | Model           | Processor    | Operating System                  | SQL Server Version/Edition    | TPC-E Score
-----------|-----------------|--------------|-----------------------------------|-------------------------------|------------
6/27/2011  | System x3850 X5 | Xeon E7-4870 | Windows Server 2008 R2 Enterprise | SQL Server 2008 R2 Enterprise | 2862.61
11/28/2012 | System x3850 X5 | Xeon E7-4870 | Windows Server 2012 Standard      | SQL Server 2012 Enterprise    | 3218.46

Table 1: Comparing Two IBM System x3850 X5 TPC-E Submissions
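For reference, the 12.4% figure comes straight from the two published scores in Table 1, as this quick calculation shows:

```python
# Percentage difference between the two TPC-E scores from Table 1
# (2862.61 tpsE in 2011 vs. 3218.46 tpsE in 2012).
old_score = 2862.61
new_score = 3218.46

increase = (new_score - old_score) / old_score * 100
print(f"Score increase: {increase:.1f}%")  # Score increase: 12.4%
```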

Could this 12.4% performance jump simply be due to the newer operating system and the newer version of SQL Server?  It is very possible that there were some low-level improvements in Windows Server 2012 that work in conjunction with SQL Server 2012 to improve performance (similar to what we saw with Windows Server 2008 R2 combined with SQL Server 2008 R2). With Windows Server 2008 R2, Microsoft did some low-level optimization work so that it could scale from 64 logical processors to 256 logical processors, and that work also benefited smaller systems with fewer logical processors.  I think it is likely that some similar work was done in Windows Server 2012 so that it could scale from 256 logical processors to 640 logical processors, which might explain some of the performance increase. I have put some questions to a few of my friends at Microsoft, trying to get more detailed information about this possibility.

It is also possible that improvements in SQL Server 2012 by itself contributed to the performance increase. Another possibility is that the TPC-E team at IBM simply did a much better job on this newer system. If you dive deeper into the two submissions, you will notice some other differences in the hardware and the test environment.  The newer submission used a system with 2048GB of RAM and (126) 200GB SAS SSDs for database storage, with a 13.3TB initial database size, while the older submission used a system with 1024GB of RAM and (90) 200GB SAS SSDs for database storage, with an 11.6TB initial database size. As long as you have sufficient I/O capacity to drive the TPC-E workload, the TPC-E score is usually limited by processor performance, so I don’t really think the RAM and I/O differences are that significant here.

What do you think about this?  I would love to hear your opinions and comments!

Speaking at the Rocky Mountain Oracle Users Group Training Days on February 12, 2013

In a change of pace, I will be speaking at an Oracle event in February: the Rocky Mountain Oracle Users Group Training Days, which runs February 11-13, 2013 in Denver, Colorado. This is a pretty large event, with a long list of presentations.  I will be doing a hardware presentation, which is actually just as relevant for SQL Server as it is for Oracle. Who knows, maybe I can convince some Oracle DBAs to take a look at SQL Server?

Here is my abstract:

Hardware 301: Diving Deeper into Database Hardware
Microsoft made some sweeping changes to their software licensing model for SQL Server 2012 Enterprise Edition, moving from socket-based licensing to core-based licensing. This alters much of the conventional criteria for hardware selection for database servers that will be running SQL Server 2012. This also caused a significant amount of angst in some quarters, with fears of huge increases in SQL Server 2012 licensing costs compared to older versions of the product. This session will cut through the uncertainty and hype to show you how to properly evaluate and choose your database hardware for usage with SQL Server 2012. You will learn how to choose hardware for different types of workloads and how to get the best performance and scalability for the lowest licensing cost.


Oracle versus Microsoft on Vaporware Fantasies

A couple of weeks ago, Oracle’s SVP of Communications Bob Evans wrote a pretty inflammatory blog post that was on ForbesBrandVoice, which seeks to “connect marketers to the Forbes audience”. The original post was removed on November 20, so my link points to a cached copy. In the post, he calls the Microsoft Hekaton project vaporware. Quoting Oracle’s Andy Mendelsohn: “Since Microsoft has no competitive product on the market today, they’ve invented this vaporware project ‘Hekaton’ as their response.” Bob Evans goes on to say:

“Even if Microsoft is able to deliver that feature in the next 24-30 months, Mendelsohn said, the objective reality will be that Microsoft’s in-memory database will be available on industry-standard hardware—end of story. And that, way off in the year 2014 or 2015, will simply not match up even to what Oracle has today.”

This article was noticed and forwarded around in the SQL Server community, causing much amusement. Somewhat surprisingly, Microsoft’s Nick King responded in a very forceful way with this blog post. Here is a small excerpt:

“So I challenge Oracle, since our customers are increasingly looking to In-Memory technologies to accelerate their business. Why don’t you stop shipping TimesTen as a separate product and simply build the technology in to the next version of your flagship database? That’s what we’re going to do.

This shouldn’t be construed as a “knee-jerk” reaction to anything Oracle did. We’ve already got customers running ‘Hekaton’ today, including online gaming company Bwin, who have seen a 10x gain in performance just by enabling ‘Hekaton’ for an existing SQL Server application. As Rick Kutschera, IT Solutions Engineer at Bwin puts it, “If you know SQL Server, you know Hekaton”. This is what we mean by “built in”. Not bad for a “vaporware” project we just “invented”.”

As far as Hekaton goes, it is definitely not vaporware. I have been aware of it for quite some time, but I could not talk about it until Microsoft announced it and demonstrated it at the PASS Summit 2012. Microsoft’s Dave Campbell writes about it here. The fact that Microsoft has actual customers already using the technology means that it will likely show up sooner than Oracle expects (not that I have any inside knowledge about that).

So what does this mean to you, as you are busy running your current SQL Server infrastructure?  Well first of all, keep in mind that Hekaton is targeted at very volatile OLTP workloads, where people are running into high latch waits as they try to insert many tens of thousands of rows per second into a table. It will let you convert individual “hot” tables into Hekaton tables that will have to fit into main memory on your database server. It is not clear yet how this will play with table partitioning. You will also be able to convert standard T-SQL stored procedures (that only use Hekaton tables) into compiled stored procedures that run much faster.

Even though Microsoft has not officially announced it, you can bet that this will be an Enterprise Edition only feature. You should be looking at your server hardware to determine how old it is and how much RAM it can hold. You also should take a fresh look at server class DRAM pricing, which has fallen to extremely low levels. Server class RAM is really an amazing bargain! If you are running SQL Server Enterprise Edition, you don’t want to skimp on your RAM.

It is pretty common in the SQL Server community to use retail DRAM pricing as a benchmark for server-class DRAM pricing. Currently, 32GB DDR3 ECC DIMMs still carry a very substantial price premium at about $47/GB, while 16GB DDR3 ECC DIMMs are priced at about $15/GB.  8GB DDR3 ECC DIMMs are even more affordable at about $11/GB, but their lower capacity makes them much less attractive for database servers when you are trying to maximize total capacity.

Current two-socket servers based on the Intel Sandy Bridge-EP platform have 24 DIMM slots, so they can support 384GB of DDR3 ECC RAM using 16GB DIMMs, and they also have PCI-E 3.0 support. It would only cost $5760.00 to fully populate one of those servers with (24) 16GB DIMMs. Based on TPC-E benchmark scores, Sandy Bridge-EP systems with the Intel Xeon E5-2690 processor have the best single-threaded performance and the best price/performance for SQL Server 2012 OLTP workloads.
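As a quick sanity check on those numbers, here is a minimal sketch that works out the cost of fully populating a 24-slot server at each DIMM size, using the approximate per-GB prices quoted above (these are rough benchmark figures, not exact street prices):

```python
# Rough cost to fully populate a two-socket Sandy Bridge-EP server
# (24 DIMM slots), using the approximate $/GB figures quoted above.
dimm_slots = 24
price_per_gb = {8: 11.0, 16: 15.0, 32: 47.0}  # DIMM size (GB) -> $/GB

for size_gb in sorted(price_per_gb):
    total_gb = dimm_slots * size_gb
    total_cost = total_gb * price_per_gb[size_gb]
    print(f"{dimm_slots} x {size_gb}GB = {total_gb}GB for ${total_cost:,.2f}")
```

The 16GB DIMM line reproduces the $5760.00 figure for a 384GB configuration, and also shows why 32GB DIMMs are still hard to justify at their current premium.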

I am predicting that we will see the follow-on Ivy Bridge-EP platform roll out in Q2 2013, with something like an Intel Xeon E5-2690 V2 processor. Ivy Bridge-EP should be pin-compatible with current Sandy Bridge-EP servers (such as the Dell PowerEdge R720 and the HP ProLiant DL380p Gen8), which should let the new processors show up in existing server models even more quickly. By then, 32GB DIMM prices will probably be more affordable.  Looking further into the future, the Intel Haswell-EP platform will probably be available by Q2 2014. It will be a full Tock release from Intel, so it will probably support even more memory with better memory controllers.