SQL Server 2014 memory-optimized table variables – metadata and garbage collection

I’d been doing some experimentation with memory-optimized table types in SQL Server 2014 and thought it was about time to write about it. Yesterday, however, there was an excellent post on them on the SQL Server team blog. So I’ll stay away from restating their points for the most part, and encourage you to read their post.

First off, although you can use memory-optimized table variables outside of compiled stored procedures, I think the biggest use for them will be in compiled procedures, mainly because compiled procedures don’t support #temporary tables. So, everywhere folks use and abuse temp tables, they’ll be using memory-optimized table variables as a replacement. Be careful with this, however, because memory-optimized table variables, like “ordinary” table variables, have no statistics and, absent an OPTION (RECOMPILE) hint, SQL statements always use an estimate of one row for them.
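As a sketch of where the one-row estimate bites (the type name dbo.OrderIdList and the Sales.Orders table are hypothetical, assuming the memory-optimized table type already exists):

```sql
-- Hypothetical names: dbo.OrderIdList (memory-optimized table type)
-- and Sales.Orders (an ordinary disk-based table)
DECLARE @ids dbo.OrderIdList;

INSERT INTO @ids (OrderId)
SELECT OrderId FROM Sales.Orders WHERE OrderDate >= '20131101';

-- Without the hint, the optimizer assumes @ids holds one row;
-- OPTION (RECOMPILE) compiles the statement with the actual cardinality
SELECT o.OrderId, o.OrderDate
FROM Sales.Orders AS o
INNER JOIN @ids AS i ON o.OrderId = i.OrderId
OPTION (RECOMPILE);
```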

Memory-optimized table variables must be strongly typed, so you start by creating a table type with the CREATE TYPE DDL statement. You can’t use a durability specification in this statement, but that only makes sense, because memory-optimized table variables are non-durable by definition. They don’t participate in availability group failover. The compiled code module for them is created at CREATE TYPE time and uses a naming convention with a ‘v’ in the name (e.g. xtp_v_15_277576027.dll), rather than ‘t’ for memory-optimized table modules or ‘p’ for compiled procedure modules. Because there is no data load at SQL Server startup, this module isn’t created after an instance restart until the first time the type is used to create a variable.
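For example, a minimal memory-optimized table type looks like this (the type name, column, and bucket count are illustrative; at least one index is required):

```sql
-- A memory-optimized table type; note there is no durability clause
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH
        WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Variables of the type are declared like any other table variable
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
SELECT OrderId FROM @ids;
```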

It’s important to realize that these table variables don’t live in tempdb; they live in the memory-optimized object space of the database in which they are created. This takes pressure off tempdb, but they can compete for memory with other memory-optimized objects in the same database. In a short test, while I was doing a bunch of inserts into a memory-optimized table, I instantiated a memory-optimized table variable and filled it with a large number of rows. The inserts into the table (not the table variable) started failing because the combination exceeded my memory allocation for memory-optimized objects in that database. It’s also possible for a series of memory-optimized table variables that are active at the same time to exceed the memory allocation and fail due to lack of memory. Granted, because table variables are usually kept to a small number of rows and have a limited lifetime, this may not be a big issue; my concern was the proliferation that could result as they replace temp tables in compiled stored procedures.
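A rough sketch of the test (using the hypothetical dbo.OrderIdList type from earlier; the row count is arbitrary):

```sql
-- Fill a memory-optimized table variable with a large number of rows;
-- this consumes the same per-database memory-optimized allocation
-- that memory-optimized tables use
DECLARE @big dbo.OrderIdList;
DECLARE @i INT = 1;
WHILE @i <= 1000000
BEGIN
    INSERT INTO @big (OrderId) VALUES (@i);
    SET @i += 1;
END;
-- While @big is in scope, concurrent inserts into memory-optimized
-- *tables* in the same database can fail if the combined usage
-- exceeds the database's memory allocation
```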

There are a few more surprises. Individual memory-optimized table variables have no metadata entries the way “ordinary” table variables do, so you can’t tell how many you have active at a time by consulting the metadata. They do appear in sys.dm_db_xtp_memory_consumers with a memory_consumer_type of PGPOOL. This consumer is “used for all table variables and includes usage for serializable scans” according to the BOL description of that DMV. However, the pool doesn’t appear in sys.dm_db_xtp_table_memory_stats at all. This means that it doesn’t appear in the SSMS “Memory used by memory-optimized objects” report, even under the “system allocated memory” category. This is useful to know if you use the report or sys.dm_db_xtp_table_memory_stats to monitor usage.
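You can see the pool with a query like this, run in the database that owns the variables:

```sql
-- The PGPOOL consumer aggregates usage for all table variables
-- in this database (plus serializable scan usage)
SELECT memory_consumer_type_desc,
       allocated_bytes,
       used_bytes
FROM sys.dm_db_xtp_memory_consumers
WHERE memory_consumer_type_desc = 'PGPOOL';
```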

Finally, about memory-optimized table variables and garbage collection. Although these variables may participate in garbage collection during their lifetime (e.g. for updates or deletes), when the variable goes out of scope, the memory it used is released in its entirety (sys.dm_db_xtp_memory_consumers/used_bytes goes to zero). There is a small amount of memory (up to ~62.5 MB or so, in my tests) that remains allocated in this pool after the last variable goes out of scope, but under low-memory conditions, even this is given back.
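One way to observe this (again using the hypothetical dbo.OrderIdList type): check used_bytes while the variable is in scope, then again in a new batch after it has gone out of scope:

```sql
-- Batch 1: variable in scope; PGPOOL used_bytes is non-zero
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
SELECT used_bytes
FROM sys.dm_db_xtp_memory_consumers
WHERE memory_consumer_type_desc = 'PGPOOL';
GO

-- Batch 2: the variable has gone out of scope;
-- used_bytes for the pool drops back to zero
SELECT used_bytes
FROM sys.dm_db_xtp_memory_consumers
WHERE memory_consumer_type_desc = 'PGPOOL';
```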

Bear in mind that this is all based upon observed behavior in SQL Server 2014 CTP2 and may change by RTM. If it does, I’ll update this post.

Cheers, @bobbeauch
