(Look in the Misconceptions blog category for the rest of the month’s posts and check out the 60-page PDF with all the myths and misconceptions blog posts collected together when you join our Insider list, plus my online Myths and Misconceptions training course on Pluralsight.)
Nested transactions are an evil invention designed to allow developers to make DBAs’ lives miserable. In SQL Server, they are even more evil…
Myth #26: nested transactions are real in SQL Server.
Nested transactions do not actually behave the way the syntax would have you believe. I have no idea why they were coded this way in SQL Server – all I can think of is someone from the dim and distant past is continually thumbing their nose at the SQL Server community and going “ha – fooled you!!”.
Let me explain. SQL Server allows you to start transactions inside other transactions – called nested transactions. It allows you to commit them and to roll them back.
The commit of a nested transaction has absolutely no effect – as the only transaction that really exists as far as SQL Server is concerned is the outer one. Can you say ‘uncontrolled transaction log growth’? Nested transactions are a common cause of transaction log growth problems because the developer thinks that all the work is being done in the inner transactions so there’s no problem.
The rollback of a nested transaction rolls back the entire set of transactions – as there is no such thing as a nested transaction.
Your developers should not use nested transactions. They are evil.
If you don’t believe me, here’s some code to show you what I mean. First off, create a database with a table where each insert will generate 8KB in the log.
CREATE DATABASE [NestedXactsAreNotReal];
GO
USE [NestedXactsAreNotReal];
GO
ALTER DATABASE [NestedXactsAreNotReal] SET RECOVERY SIMPLE;
GO
CREATE TABLE [t1] ([c1] INT IDENTITY, [c2] CHAR (8000) DEFAULT 'a');
CREATE CLUSTERED INDEX [t1c1] ON [t1] ([c1]);
GO
SET NOCOUNT ON;
GO
Test #1: Does rolling back a nested transaction only roll back that nested transaction?
BEGIN TRAN OuterTran;
GO
INSERT INTO [t1] DEFAULT VALUES;
GO 1000
BEGIN TRAN InnerTran;
GO
INSERT INTO [t1] DEFAULT VALUES;
GO 1000
SELECT @@TRANCOUNT, COUNT (*) FROM [t1];
GO
I get back the results 2 and 2000. Now I’ll roll back the nested transaction and it should only roll back the 1000 rows inserted by the inner transaction…
ROLLBACK TRAN InnerTran;
GO
Msg 6401, Level 16, State 1, Line 1
Cannot roll back InnerTran. No transaction or savepoint of that name was found.
Hmm… from Books Online, I can only use the name of the outer transaction or no name. I’ll try no name:
ROLLBACK TRAN;
GO
SELECT @@TRANCOUNT, COUNT (*) FROM [t1];
GO
And I get the results 0 and 0. As Books Online explains, ROLLBACK TRAN rolls back to the start of the outer transaction and sets @@TRANCOUNT to 0. All changes are rolled back. The only way to do what I want is to use SAVE TRAN and ROLLBACK TRAN to the savepoint name.
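Here is a hedged sketch of that savepoint approach, using the [t1] table from the setup script (the savepoint name InnerSave is made up; this assumes [t1] is empty to start with):

```sql
BEGIN TRAN OuterTran;
INSERT INTO [t1] DEFAULT VALUES;   -- outer work

SAVE TRAN InnerSave;               -- a savepoint, NOT a real nested transaction
INSERT INTO [t1] DEFAULT VALUES;   -- inner work

ROLLBACK TRAN InnerSave;           -- undoes only the work since the savepoint;
                                   -- @@TRANCOUNT is unchanged
COMMIT TRAN;                       -- commits the surviving outer work

SELECT COUNT (*) FROM [t1];        -- the outer insert survived; the inner one did not
GO
```

Note that rolling back to a savepoint doesn’t release any transaction log space either; everything is still held until the outer transaction completes.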
Test #2: Does committing a nested transaction really commit the changes made?
BEGIN TRAN OuterTran;
GO
BEGIN TRAN InnerTran;
GO
INSERT INTO [t1] DEFAULT VALUES;
GO 1000
COMMIT TRAN InnerTran;
GO
SELECT COUNT (*) FROM [t1];
GO
I get the result 1000, as expected. Now I’ll roll back the outer transaction and all the work done by the inner transaction should be preserved…
ROLLBACK TRAN OuterTran;
GO
SELECT COUNT (*) FROM [t1];
GO
And I get back the result 0. Oops – committing the nested transaction did not make its changes durable.
Test #3: Does committing a nested transaction at least let me clear the log?
I recreated the database again before running this so the log was minimally sized to begin with, and the output from DBCC SQLPERF below has been edited to only include the NestedXactsAreNotReal database.
BEGIN TRAN OuterTran;
GO
BEGIN TRAN InnerTran;
GO
INSERT INTO [t1] DEFAULT VALUES;
GO 1000
DBCC SQLPERF ('LOGSPACE');
GO
Database Name          Log Size (MB)  Log Space Used (%)  Status
---------------------  -------------  ------------------  ------
NestedXactsAreNotReal  12.05469       95.81983            0
Now I’ll commit the nested transaction, run a checkpoint (which will clear all possible transaction log in the SIMPLE recovery model), and check the log space again:
COMMIT TRAN InnerTran;
GO
CHECKPOINT;
GO
DBCC SQLPERF ('LOGSPACE');
GO
Database Name          Log Size (MB)  Log Space Used (%)  Status
---------------------  -------------  ------------------  ------
NestedXactsAreNotReal  12.05469       96.25324            0
Hmm – no change – in fact the Log Space Used (%) has increased slightly from writing out the checkpoint log records (see How do checkpoints work and what gets logged). Committing the nested transaction did not allow the log to clear. And of course not, because a rollback can be issued at any time which will roll back all the way to the start of the outer transaction – so all log records are required until the outer transaction commits or rolls back.
And to prove it, I’ll commit the outer transaction and run a checkpoint:
COMMIT TRAN OuterTran;
GO
CHECKPOINT;
GO
DBCC SQLPERF ('LOGSPACE');
GO
Database Name          Log Size (MB)  Log Space Used (%)  Status
---------------------  -------------  ------------------  ------
NestedXactsAreNotReal  12.05469       26.4339             0
And it drops right down.
Nested transactions: just say no! (a public service announcement from the nice folks at SQLskills.com :-)
30 thoughts on “A SQL Server DBA myth a day: (26/30) nested transactions are real”
Is there no mention about the rarely used SAVE TRAN stmt??!!
You can ‘simulate’ a nested rollback with those. Are they evil as well?
Oops, missed that….Sorry
I’ve actually never seen savepoints used in any app, and never used them myself in any app.
I do mention SAVE TRAN – bottom of test #1
I actually used save points once in the following scenario
1) do a bunch of initial stuff for sure
2) do a bunch more stuff as long as the initial stuff is done
3) if it turns out you can’t do all of the "more stuff" that’s ok – roll it back, but still do the initial stuff
…save points seemed to be the way to go and they worked fine, but frankly there was probably a better way…
I’m certain you know about this – but for the benefit of other people reading:
In 2008, it is possible to create ‘properly’ nested (‘autonomous’) transactions using the new ‘remote proc transaction promotion’ option of sp_serveroption, with a loop-back linked server. Details at:
Not recommended for high-transaction environments (it does not scale any better than the equivalent SQLCLR solution using a new non-enlisted connection) but it is an interesting new feature.
Hardly evil. The behavior ranted about is the desired behavior. You absolutely want the nested transaction to rollback to the beginning; that’s the whole point. The myth is a straw man argument.
Maybe desired for you – most of the rest of the SQL Server community believes it rolls back to the most recent BEGIN TRAN. The myth is real and trips developers up all the time.
Myth of the day: developers – and vendors that design dbs for developers – care what dbas think. You’re a dba, a database administrator. At the risk of sounding like I’m belittling you or your trade: you’re not a developer, and your surface level understanding of how and why developers design transactions to behave in a particular way is outside your field of expertise, despite your opinion in this case accidentally being correct. As a dba, you have – or at least should have – no idea what my intentions are as a developer (besides your ability to parse simple dml commands that are being issued by the application). That you think you should know, or would argue that you can better do your job if you are made aware, may apply in a limited subset of trivial cases. Particularly when poorly trained developers use the database for anything other than a simple storage mechanism. But any good developer knows to keep the database – and its usage – as simple as possible. As scale becomes more and more important (the inter-webs is getting busier and busier in case you hadn’t noticed), the more obvious it is that relying on monolithic relational databases to perform application logic is a bottleneck, and a design flaw. Gone are the days that well trained developers put non-trivial application functionality inside monolithic, difficult to scale (I’m talking real scale), overly complex, databases, meaning the scope of your responsibilities and opinions are becoming increasingly narrow. Your entire article (and even website) is predicated on the assumption that developers are poorly designing applications, applications that require dba opinion due to database feature overuse, which as I explained above, just creates havoc. One could argue that there are poorly trained developers out there, thus necessitating your job as a dba. 
But that simply means you should be acting as a gatekeeper to the database by blocking developers from – for example – trying to write sql to handle their complex application logic in the application. That doesn’t grant the dba a licence to interject their opinion into how the poorly written code should be managed in the db, which I find dbas doing regularly. There’s a reason NoSql databases are becoming more and more popular. Yes it’s partly a fad, but it’s born out of a requirement, or rather, a non-requirement for overly complex database engines, assuming applications are well written. They encourage developers by means of feature restriction, to treat the database as they should. As a logical mechanism for storage, not a mechanism for poorly managing application logic.
And just as a side note, while my last comment went out of scope of the original post, it was the straw that broke the camel’s back for me after reading post after post from dbas on various websites (including this one), where dbas really need to get out from their ivory towers and quit telling developers how to use the database. The only exception to this rule is when developers are objectively mis-using the database, which only accounts for a small percentage of the garbage I hear coming from dbas.
Thanks for your comments, which I partly agree with, but mostly disagree with as they’re predicated on a view of the SQL Server development industry that isn’t reality. Yes, you’re correct that well-trained devs should not be writing crappy code – unfortunately most devs are not well trained in my experience (and I spent 10 years as a dev at DEC and Microsoft (writing portions of SQL Server), and 10 more years experiencing and dealing with crappy code written by SQL Server devs). DBAs need to be able to work with developers to let them know that their code is crappy, and developers need to understand how the system they’re programming against works, so they can write correct code. This post is an example of where SQL Server doesn’t work as a developer would intuitively think, and so devs write incorrect code that messes up the system. You can rant all you want against it, but that’s the state of a lot of SQL Server-specific development we see. Developers need to be educated, which many companies go about the wrong way, by not fostering good relations between DBA and development teams – which it sounds like you’ve experienced with DBAs telling you what to do.
“Gone are the days that well trained developers put non-trivial application functionality inside monolithic, difficult to scale (I’m talking real scale), overly complex, databases, meaning the scope of your responsibilities and opinions are becoming increasingly narrow.”. Take out the ‘well-trained’, and you have the state of a large portion of the industry today. We see this over and over when working with clients. The myth is really that most developers are well-trained, which they’re not. What’s really cool is that we’re seeing more and more developers taking our training classes which explain how SQL Server works, so they’re able to write more scalable and highly-performing code.
I am a developer (mostly C#). I am also a very well-versed database designer/programmer/optimizer. I second Paul’s reply. Beyond that I would submit that your posts are rather arrogant. The biggest problem I have encountered in my 22 years of development is poorly trained developers. After that comes arrogant developers who don’t consult their DBAs because they “know better.” Are there poorly trained DBAs out there? Sure, but in my experience the percentage of poorly trained/bad DBAs is tiny compared to poorly trained/bad developers. I don’t take comments made by DBAs as gospel, but I usually consult my DBA if I am attempting something outside the realm of basic DML, especially when I have never implemented a specific feature in a production environment, and they USUALLY save my bacon. Not only that, but when their suggestion doesn’t work or isn’t optimal and I find a better solution, I share it with them with a good attitude. It’s amazing the mutual respect I build with them and how they start to actually value my feedback.
Regarding the explosion of NoSql, you seem to miss the point. NoSql databases serve a different purpose (when used properly) than a RDBMS and neither one works well when used improperly.
Aaron, did you even read the article? Go to a beach near you. Chill a little :-)
I’m a long-time developer. I maintain code by other long-time developers. I run into this misconception constantly, and the problems caused by it are severe. And this has been at multiple places since the 90s, when I first started with SQL Server. I’m literally dealing with a very bad and complex case of it right now. Your assumptions may fit what you’ve observed, but I’ve seen this across many environments and dev shops.
If the original coders of these issues you see constantly are relying on this behaviour, did they not run a basic smoke test to ensure that the error handling behaved as they expected? Did they never even check that their code handles an error correctly before checking in? Did they not write any unit tests to check what happens when an error occurs? How has this “constantly” become a “severe problem”? Why are your colleagues checking in completely untested code in the first place? Why is nobody peer reviewing their code? What happened to the QA team, or even just automated integration testing?
Regardless of the general concept of nested transactions potentially being a myth, your colleagues aren’t doing their job to a professional standard if they’re not bothering to run the code that they wrote themselves before they check in, to check that any basic error is handled correctly. I can understand introducing a bug; to err is human. However “constantly” observing the same “severe problems” due to this same naïve misconception, silly assumptions and lack of testing: something’s gone wrong here – it’s not the fault of the database.
@CrankyRat — No, I do NOT want the “nested” transaction rollback to roll back everything. Paul is right, this trips up developers all the time.
A subroutine or called proc that is doing something, and might want to roll back what it started, SHOULD ideally have no effect on what the caller was doing. The subroutine should not be committing on behalf of the caller. That way lies madness. I hope this gets fixed someday. SQL 2018, maybe?
Wow, I just wanted to find out if Nested transactions are a real thing and he [OP] proves that they are not. The rant that ensues, well is useless, but I hope everyone feels better. Thanks for this article!
I’m confused about when to use SET XACT_ABORT ON versus XACT_STATE() or @@TRANCOUNT. If we have 50 copy and 50 delete operations in one transaction, I want them all to roll back if any error happens.
As soon as you do BEGIN TRANSACTION, you have one transaction. If you do more BEGIN TRANSACTION, you still have one transaction as far as SQL Server is concerned, even though @@trancount increases. If anything rolls back, the whole thing, back to the first BEGIN TRANSACTION rolls back.
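A hedged sketch of the usual pattern for that requirement (a single outer transaction with SET XACT_ABORT ON and TRY/CATCH; the placeholder comments stand in for your 50 copy and 50 delete statements, and THROW needs SQL Server 2012 or later):

```sql
SET XACT_ABORT ON;   -- most run-time errors now doom the whole transaction

BEGIN TRY
    BEGIN TRAN;
    -- ... all 50 copy statements and all 50 delete statements go here ...
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRAN;   -- rolls back ALL the work, back to the single BEGIN TRAN
    THROW;               -- re-raise the error so the caller knows it failed
END CATCH;
```

There is no need for nested BEGIN TRANs inside the block; one transaction around all 100 statements gives you the all-or-nothing behavior you want.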
Thanks for your article, it’s very clarifying.
There is one point I’m not sure I agree with. You say “I get the result 1000, as expected. Now I’ll roll back the outer transaction and all the work done by the inner transaction should be preserved…”
I’m not sure I agree that the inner transaction should be preserved. You may be calling multiple inner procedures from an outer procedure and you may want to roll back everything from the outer procedure, including inner transactions in the inner procedures, under some conditions.
Moreover, there could be cases in which it might not even be possible to preserve the inner transaction if the outer transaction is rolled back. See the sample below. If the inner transaction were to be preserved, an error would occur at the time the outer transaction is rolled back.
CREATE DATABASE [NestedXactsAreNotReal];
ALTER DATABASE [NestedXactsAreNotReal] SET RECOVERY SIMPLE;
CREATE TABLE [t1] ([c1] INT IDENTITY, [c2] CHAR (8000) DEFAULT 'a');
CREATE UNIQUE CLUSTERED INDEX [t1c1] ON [t1] ([c1]);
CREATE TABLE [t2] ([c1_ref] [int], [c2] [char](8000) DEFAULT 'a');
ALTER TABLE [dbo].[t2] WITH CHECK ADD CONSTRAINT [FK_t2_t1] FOREIGN KEY([c1_ref]) REFERENCES [dbo].[t1] ([c1])
ALTER TABLE [dbo].[t2] CHECK CONSTRAINT [FK_t2_t1]
SET NOCOUNT ON;
BEGIN TRAN OuterTran;
BEGIN TRAN InnerTran;
INSERT INTO [t1] DEFAULT Values;
COMMIT TRAN InnerTran;
SELECT COUNT (*) FROM [t1];
INSERT INTO [t2] VALUES (1, 'references t1');
ROLLBACK TRAN OuterTran; -- this rollback could not be executed without rolling back the inner transaction
If you read further down, you’ll see that the statement you quoted was what people think, and I then show that it’s false – the inner transactions *are* rolled back.
Paul – I wanted to say Thanks for your blog. I’m struggling with this very issue right now and it has become very apparent that all the lines of code I’ve written for various ROLLBACK scenarios has been in vain. Time to refactor now that I have a MUCH clearer understanding of how the ROLLBACK processing works!
There are good and bad DBAs and developers. I’ve worked with both. Maybe at different times I was both. I go way back to the first version of SQL Server. I can tell you that at that time everybody was a newbie. The fact is that the transactions, all of them, often need to roll back. If you are inserting both parent and child records and one of the inserts fails, you do not want to end up with orphaned rows. It is better to know they all roll back and act accordingly.
Agree! If you begin a transaction, all of the trans within the outer should roll back all the inner and the outer trans. This ensures “parallel processing” in a sense where multiple updates are made, or none are made at all.
So what is the purpose of a “nested” tran then?
In my case, I want to record the set of parameters that were passed to a nested proc, when it errors out.
However, since the calling proc rolls back, the error tracking record insert is rolled back as well.
I’m a script kiddie who has taught himself a bit of VBA and PowerShell, and now I want to help my colleagues in their daily work by setting up a SQL Server to store and retrieve some data (I’ll need to learn a bit of C# to build a little desktop app, for a user interface that is more enjoyable than the PowerShell window).
I need to log some ‘outer’ data of files into a table (filename, filesize, upload time etc), retrieve the row id of the inserted row and, in another table, store data within that file.
I want to be sure that neither the inner data nor the outer data is stored if, at the end of the workflow, I don’t get all the ‘inside’ data written to the second table (i.e. if the file has 100 points of data and, for any reason, something goes wrong and I store only 99 points, I want nothing recorded at all, in either the first table or the second).
To do this I started reading about transactions.
I was surely going to use nested transactions sooner or later, and since you have explained so well that they don’t really exist (at least they don’t seem to be of any use), I won’t go through hours of struggle trying to understand why my code doesn’t work as expected!
Since I am here, would you tell me whether this is the right approach for a case like this, or is there another pattern?
Again, thanks for sharing!
Yes – you definitely want an explicit transaction that you can roll back (or will be automatically rolled back) if something goes wrong.
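A hedged sketch of what that could look like (the table and column names here are made up for illustration; THROW needs SQL Server 2012 or later):

```sql
BEGIN TRY
    BEGIN TRAN;

    -- log the 'outer' data about the file
    INSERT INTO [FileLog] ([FileName], [FileSizeBytes], [UploadTime])
    VALUES (N'readings.csv', 12345, SYSDATETIME());

    DECLARE @fileId INT = SCOPE_IDENTITY();   -- id of the row just inserted

    -- insert all 100 data points, each referencing @fileId ...
    INSERT INTO [FileData] ([FileId], [Point]) VALUES (@fileId, 1.23);

    COMMIT TRAN;   -- only now do the rows in BOTH tables become durable
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;   -- nothing is kept in either table
    THROW;
END CATCH;
```

One explicit transaction around both inserts is all you need; there’s no reason to nest a second BEGIN TRAN around the inner inserts.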
Thanks for the great info on this site, and all the helpful examples of how this feature works!
I was intrigued when I saw your article title, then confused, so I ran some tests, and behold: nested transactions work exactly the way I thought they did. If I am calling lots of methods in my code, each with their own transactions, then I would want my outer transaction to be the controlling transaction, so I can have all my transactions commit until an error, at which time I roll back the outer transaction and all the inner ones are rolled back as expected. I would not expect the inner transactions to commit until the controlling method commits, so as this works correctly, I am not sure why they are evil. Maybe a bit old fashioned these days in the light of distributed transactions, but they still work correctly.
Exactly right. But you are in a tiny minority who want and understand that behavior. Most SQL Server devs believe that an inner transaction, once committed, is irrevocably so. And that committing an inner transaction frees up the transaction log space for that inner transaction, which it does not. The problem isn’t the way it actually works, but the incorrect assumptions of the majority. Thanks