In-memory OLTP – read_set, write_set, and scan_set

I was looking through some in-memory OLTP DMVs in SQL Server 2014 and came across these memory consumers in the DMV sys.dm_xtp_system_memory_consumers: READ_SET_LOOKASIDE, WRITE_SET_LOOKASIDE, and SCAN_SET_LOOKASIDE.

I thought it might be nice to see how these work, but first I'll explain what they're about. By the way, a lookaside is just a cache, sometimes of pointers to memory addresses. You've probably heard of the L1 and L2 caches built into CPUs.

These sets and their corresponding lookasides have to do with how transactions work against memory-optimized tables, which in SQL Server 2014 use multi-version concurrency control (MVCC). The three transaction isolation levels supported against MVCC tables (lowest to highest) are snapshot, repeatable read, and serializable. In all three levels, reads are performed as of the beginning of the transaction, so a transaction reads only row versions whose begin-timestamp and end-timestamp bracket the transaction's begin-time.
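The visibility rule above can be sketched in a few lines. This is a conceptual illustration in Python, not SQL Server code; the names are mine:

```python
# Conceptual sketch of MVCC row-version visibility (not SQL Server code).
# A version is visible to a transaction when the transaction's read
# timestamp falls between the version's begin- and end-timestamps.

INFINITY = float("inf")  # end-timestamp of the current (live) version

def is_visible(read_ts, begin_ts, end_ts=INFINITY):
    """Visible when begin_ts <= read_ts < end_ts."""
    return begin_ts <= read_ts < end_ts
```

A row version created at timestamp 50 and ended at 200 is visible to a transaction that began at 100, but a version created at 150 is not.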

Snapshot transactions require no additional processing; they simply read the correct versions. Repeatable read means that the versions you read at the beginning of the transaction must be unchanged when the transaction ends. Serializable means repeatable read and, in addition, that if you're doing a scan (e.g. select rows where id is between 10 and 20), no new rows appeared in your range between the beginning and the end of your transaction. The READ_SET and SCAN_SET support repeatable read and serializable transactions, respectively.

SQL Server's MVCC accomplishes the additional checks by **re-reading the rows at commit time**. That sounds expensive, so the re-reading needs to be fast: instead of performing the whole read operation again, a repeatable read transaction keeps a pointer to each row version it has read in a READ_SET, and at commit time follows those pointers to check that the versions are unchanged. You can see this behavior on a system with no other activity, in sys.dm_xtp_system_memory_consumers.

-- execute sys.dm_xtp_system_memory_consumers and check the READ_SET_LOOKASIDE amount
select * from sys.dm_xtp_system_memory_consumers;
-- now a repeatable read transaction using the AdventureWorks memory-optimized table sample
begin transaction
select * from Sales.SalesOrderHeader_inmem with (repeatableread);
-- note READ_SET_LOOKASIDE should be bigger
select * from sys.dm_xtp_system_memory_consumers;
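The commit-time check that the READ_SET enables can be sketched conceptually. Again this is an illustration in Python under my own names, not the actual engine code:

```python
# Conceptual sketch (names are mine, not SQL Server's): a repeatable-read
# transaction records a pointer to each version it reads; at commit it
# revisits those pointers to confirm each version is still valid, rather
# than re-executing the whole read.

INFINITY = float("inf")

class Version:
    def __init__(self, begin_ts, end_ts=INFINITY):
        self.begin_ts = begin_ts
        self.end_ts = end_ts    # set by a later updating transaction

def validate_read_set(read_set, commit_ts):
    """Every version we read must still be current at commit time; if a
    concurrent transaction ended one of them, validation fails."""
    return all(v.begin_ts <= commit_ts < v.end_ts for v in read_set)
```

If another transaction updated one of the rows in the meantime, that row's version acquires an end-timestamp, the check fails, and the repeatable read transaction rolls back.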

A serializable transaction needs to ensure read consistency with the READ_SET and also, with the SCAN_SET, ensure that its scans would produce the same rows and no additional rows. Choose a different table if you want to see both lookasides increase: if nothing has changed in Sales.SalesOrderHeader_inmem since the first experiment, the READ_SET_LOOKASIDE will retain the same rows and the same size:

-- execute sys.dm_xtp_system_memory_consumers and check the READ_SET_LOOKASIDE and SCAN_SET_LOOKASIDE amounts
select * from sys.dm_xtp_system_memory_consumers;
-- now a serializable transaction using the AdventureWorks memory-optimized table sample
begin transaction
select * from Production.Product_inmem with (serializable);
-- note READ_SET_LOOKASIDE and SCAN_SET_LOOKASIDE should be bigger
select * from sys.dm_xtp_system_memory_consumers;
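The phantom check that the SCAN_SET enables amounts to repeating each scan at commit time and comparing results. A conceptual sketch, with hypothetical names of my own choosing:

```python
# Conceptual sketch (not SQL Server code): a serializable transaction
# remembers each scan's predicate and the set of keys it returned. At
# commit time, the scan is repeated; if it returns a different set of
# keys, a phantom row entered (or left) the range and validation fails.

def validate_scan_set(scan_set, table):
    """scan_set: list of (predicate, keys_seen_at_read_time) pairs.
    table: dict of key -> row currently visible."""
    for predicate, keys_seen in scan_set:
        keys_now = frozenset(k for k, row in table.items() if predicate(row))
        if keys_now != keys_seen:
            return False  # phantom detected: fail the transaction
    return True
```

For a range scan like "id between 10 and 20", inserting id 18 concurrently makes the repeated scan return an extra key, so the serializable transaction fails validation and rolls back.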

So what about the WRITE_SET, then? While a transaction that updates MVCC rows is running, it replaces the end-timestamp of each row it updates with its transaction ID; a bit indicates whether the field holds an end-timestamp or a transaction ID. The transaction ID also prevents concurrent updates of the same row: it's not a lock, a transaction that attempts a concurrent update simply fails and rolls back. The updating transaction also inserts new row versions with its transaction ID as the begin-timestamp.
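That "fail instead of block" behavior for concurrent writers can be sketched like this. A conceptual illustration with invented names, not engine code:

```python
# Conceptual sketch (not SQL Server code): a version's end-timestamp slot
# holds either a real timestamp or the updating transaction's ID, with a
# bit distinguishing the two. A second writer that finds a transaction ID
# there does not wait on a lock; it fails immediately.

INFINITY = float("inf")

class Version:
    def __init__(self, begin_ts):
        self.begin_ts = begin_ts
        self.end = INFINITY        # timestamp, or a txn ID while updating
        self.end_is_txn_id = False # the "bit" distinguishing the two

class WriteConflict(Exception):
    pass

def begin_update(version, txn_id):
    """Claim the version for update by stamping our transaction ID."""
    if version.end_is_txn_id:
        raise WriteConflict("row is already being updated; first writer wins")
    version.end = txn_id
    version.end_is_txn_id = True
```

The first updater stamps its ID into the slot; a second updater raises the conflict and must roll back.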

Transaction commit happens like this:

1. Acquire an end-timestamp
2. Validate (see READ_SET and SCAN_SET above)
3. Harden to the transaction log
4. Update the timestamps in the updated/new rows
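The last step is where the WRITE_SET earns its keep: it lets commit walk straight to the affected versions instead of searching for them. A conceptual sketch, assuming my own representation of versions as dicts:

```python
# Conceptual sketch (not SQL Server code) of commit step 4: after
# validation and logging, walk the WRITE_SET and replace each
# transaction-ID placeholder with the transaction's end-timestamp.

def patch_timestamps(write_set, txn_id, end_ts):
    """write_set: list of (version, kind) pairs, kind 'old' or 'new'.
    Old versions get end_ts as their end-timestamp; new versions get
    end_ts as their begin-timestamp, replacing the transaction ID."""
    for version, kind in write_set:
        if kind == "old":
            assert version["end"] == txn_id    # placeholder we stamped earlier
            version["end"] = end_ts
        else:
            assert version["begin"] == txn_id  # new version we inserted
            version["begin"] = end_ts
```

After this pass, readers see ordinary timestamps everywhere and the transaction ID is gone from the row versions.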

The WRITE_SET is used to make updating those timestamps fast. Because this entry is getting a bit long, I'll leave it to the reader to observe WRITE_SET activity using the DMV. I'll also leave it to you to figure out how this transaction commit sequence guarantees transactionally consistent data. Hint: edge cases and additional information are available in my previous blog entries on the subject.

Cheers, @bobbeauch
