So I was going through the features in SQL Server 2014, seeing how and where they fit into the canon of SQL Server’s T-SQL. I’m going to add some of this to my advanced T-SQL Immersion Event at the end of September. SQL Server 2014 is supposed to be a release with no T-SQL changes at all. No windowing enhancements, no time interval data types, nothing. So the effect on the T-SQL programmer should be minimal, right? Well, maybe not so minimal.
The most visible new feature in SQL Server 2014 is known as Hekaton. In a perfect world, Hekaton tables would simply be a deployment decision by the DBA. The miracle “go faster” switch: just buy some more memory and move your tables over. But it turns out that the surface area of tables, and of statements on tables, is wide and deep. And in “V1” of memory-optimized tables there are a number of constructs that they don’t support. These include some data types, as well as constraints and triggers. So T-SQL developers need to “make up” for the lack of these by coming up with their own schemes, no mean feat, although there are some examples of this type of code in Books Online. And that’s just the beginning; the list of unsupported items means that the move of a table to memory-optimized won’t be a simple change to DDL.
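As one sketch of what “making up” for a missing constraint might look like, here’s a manual referential-integrity check done in the insert procedure, since FOREIGN KEY constraints aren’t supported on memory-optimized tables in this release. The dbo.Customer and dbo.CustomerOrder tables are hypothetical, and I’m assuming the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT is ON so the explicit transaction can touch memory-optimized tables without isolation hints:

-- Hypothetical workaround: enforce the "foreign key" by hand, inside one transaction.
CREATE PROCEDURE dbo.InsertCustomerOrder
    @OrderId int,
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- The check a FOREIGN KEY constraint would normally do for us
        IF NOT EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerId = @CustomerId)
        BEGIN
            ROLLBACK TRANSACTION;
            THROW 50001, 'Customer does not exist.', 1;
        END;
        INSERT INTO dbo.CustomerOrder (OrderId, CustomerId)
        VALUES (@OrderId, @CustomerId);
    COMMIT TRANSACTION;
END;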
As another example, the change from pages to hash buckets in the in-memory engine gives new credibility (well, gives credibility) to the concept of uniqueidentifier as a primary key. With hash buckets, spreading out the key values won’t cause fragmentation; it should actually be as fast or faster. It is still a bigger data type, though, which means more to keep in memory. Just buy a bit more memory and a fast disk for the log.
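A minimal sketch of such a table, assuming a database that already has a MEMORY_OPTIMIZED_DATA filegroup; the table name and bucket count are illustrative, and BUCKET_COUNT should be sized to roughly the number of unique key values you expect:

CREATE TABLE dbo.SessionState
(
    SessionId uniqueidentifier NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    LastAccess datetime2 NOT NULL,
    Payload varbinary(4000) NULL  -- LOB types aren't supported in this release
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);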
Natively compiled T-SQL stored procedures that call table-specific code are needed to make memory-optimized tables run as fast as possible. In fact, compiled stored procedures can only access memory-optimized tables. These aren’t your traditional stored procedures. They must be defined with a transaction isolation level, as well as other options. And these stored procedures also have a number of limitations, including restrictions on some built-in functions, tempdb usage, and collation choice. They also compile on first use (when you create them initially and when you bring up SQL Server) and don’t recompile because of statistics changes. Finally, these procedures need error handling because, depending on isolation level, they might roll back due to the multi-version engine. Ah, the multi-version engine. That may affect T-SQL programmers a little.
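Here’s a sketch of what one looks like against the hypothetical dbo.SessionState table above, plus a retry loop in the caller, since conflicts in the multi-version engine surface as errors rather than blocking. The listed error numbers are the usual retryable ones; treat them as an assumption to verify against the documentation:

-- Mandatory options: NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS,
-- and an ATOMIC block that specifies isolation level and language.
CREATE PROCEDURE dbo.TouchSession
    @SessionId uniqueidentifier
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET LastAccess = SYSDATETIME()
    WHERE SessionId = @SessionId;
END;
GO

-- Caller-side retry: the procedure's transaction can be rolled back by a
-- write conflict (41302) or a commit-time validation failure.
DECLARE @id uniqueidentifier = NEWID(), @retry int = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.TouchSession @SessionId = @id;
        SET @retry = 0;
    END TRY
    BEGIN CATCH
        SET @retry -= 1;
        IF @retry = 0 OR ERROR_NUMBER() NOT IN (41302, 41305, 41325) THROW;
    END CATCH;
END;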
There’s more than just rowstore and linked-list memory-optimized tables. This release can use columnstore (xVelocity) as the primary storage vehicle, rather than as an additional copy of your warehouse data as in SQL Server 2012. And the columnstore is updatable now. And because columnstore is “V2” now, programmers can “undo” almost every workaround that was needed to take full advantage of “Batch Mode” processing, because constructs like outer joins will now use Batch Mode automatically.
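A minimal sketch, assuming a hypothetical dbo.FactSales warehouse table: the clustered columnstore index becomes the table’s only storage, and unlike the 2012 nonclustered columnstore, the table stays writable:

-- The clustered columnstore index replaces the rowstore as the table's storage.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- No more dropping or disabling the index around loads; the table is updatable.
INSERT INTO dbo.FactSales (DateKey, ProductKey, SalesAmount)
VALUES (20140301, 42, 19.99);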
Let’s not forget the new cardinality estimation. If you’re fearing plan regressions, you can always opt out, but it sounds like some tuning, as well as better estimates leading to better plans, will be going on. And there’s always parallelism in SELECT..INTO too.
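A sketch of the opt-out options, using a hypothetical database and query; the per-query trace flag needs elevated permissions, so verify the details before relying on it:

-- Database-wide: stay on a pre-2014 compatibility level to keep the legacy estimator.
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 110;

-- Per-query: trace flag 9481 forces the legacy cardinality estimator for one statement.
SELECT c.CustomerId, SUM(o.Amount) AS Total
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerId
OPTION (QUERYTRACEON 9481);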
So even if they never add another T-SQL feature to SQL Server 2014 by RTM, there’s lots to talk about that pertains to the way you’ll write T-SQL in the future, as well as, of course, lots about how we write it for “best results” today. See you later this month.
@bobbeauch
One thought on “Integrating SQL Server 2014 constructs into the T-SQL canon”
Thanks for this beautiful post!!
I am curious to know about new features related to Big Data mapping in SQL Server 2014.
Do you have any ideas related to this?