{"id":4336,"date":"2014-03-17T19:58:32","date_gmt":"2014-03-18T02:58:32","guid":{"rendered":"http:\/\/3.209.169.194\/blogs\/paul\/?p=4336"},"modified":"2019-12-30T17:30:20","modified_gmt":"2019-12-31T01:30:20","slug":"worrying-cause-log-growth-log_reuse_wait_desc","status":"publish","type":"post","link":"https:\/\/www.sqlskills.com\/blogs\/paul\/worrying-cause-log-growth-log_reuse_wait_desc\/","title":{"rendered":"What is the most worrying cause of log growth (log_reuse_wait_desc)?"},"content":{"rendered":"<p>Two weeks ago I kicked off a <a href=\"https:\/\/www.sqlskills.com\/blogs\/paul\/survey-worrying-cause-log-growth\/\" target=\"_blank\" rel=\"noopener noreferrer\">survey<\/a>\u00a0that presented a scenario and asked you to vote for the <em>log_reuse_wait_desc<\/em> value you&#8217;d be most worried to see on a critical database with a 24&#215;7 workload.<\/p>\n<p>Here are the results:<\/p>\n<p><a href=\"https:\/\/www.sqlskills.com\/blogs\/paul\/wp-content\/uploads\/2014\/03\/logreuse.jpg\"><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-4337\" src=\"https:\/\/www.sqlskills.com\/blogs\/paul\/wp-content\/uploads\/2014\/03\/logreuse.jpg\" alt=\"logreuse\" width=\"575\" height=\"330\" srcset=\"https:\/\/www.sqlskills.com\/blogs\/paul\/wp-content\/uploads\/2014\/03\/logreuse.jpg 575w, https:\/\/www.sqlskills.com\/blogs\/paul\/wp-content\/uploads\/2014\/03\/logreuse-300x172.jpg 300w\" sizes=\"(max-width: 575px) 100vw, 575px\" \/><\/a><\/p>\n<p>Another very interesting spread of responses &#8211; as always, thanks to everyone who took the time to respond.<\/p>\n<p>Remember that you have no knowledge about anything on the system except what the <em>log_reuse_wait_desc<\/em> is for the database with constantly growing log.<\/p>\n<p>Before I start discussing them, I&#8217;m going to explain the term\u00a0<em>log truncation<\/em> that I&#8217;ll be using. 
This is one of the two terms used (the other is <em>log clearing</em>) to describe the mechanism that marks currently-active VLFs (virtual log files) in the log as inactive so they can be reused. When there are inactive VLFs, the log can wrap around (see <a href="https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-more-on-the-circular-nature-of-the-log/" target="_blank" rel="noopener noreferrer">this post</a> for more details) and doesn't have to grow. When there are no available VLFs because log truncation isn't possible, the log has to grow. When log truncation can't make any VLFs inactive, it records the reason why, and that's what the <em>log_reuse_wait_desc</em> value in <em>sys.databases</em> gives us. You can read more about how the log works in <a href="https://technet.microsoft.com/en-us/library/2009.02.logging.aspx" target="_blank" rel="noopener noreferrer">this TechNet Magazine article</a> I wrote back in 2009, and get all the log info you could ever want in my <a href="https://www.pluralsight.com/courses/sqlserver-logging" target="_blank" rel="noopener noreferrer">logging and recovery Pluralsight course</a>.</p>

<p>We also need to understand how the <em>log_reuse_wait_desc</em> reporting mechanism works. It gives the reason why log truncation couldn't happen the last time log truncation was attempted. This can be confusing - for instance, if you see <em>ACTIVE_BACKUP_OR_RESTORE</em> and you know there isn't a backup or restore operation running, it just means that there was one running the last time log truncation was attempted. You can also see some weird effects - for instance, if you do a log backup and then immediately see <em>LOG_BACKUP</em> as the <em>log_reuse_wait_desc</em> value.
I blogged about the reason for that phenomenon <a href="https://www.sqlskills.com/blogs/paul/why-is-log_reuse_wait_desc-saying-log_backup-after-doing-a-log-backup/" target="_blank" rel="noopener noreferrer">here</a>.</p>

<p>Let's quickly consider what each of the <em>log_reuse_wait_desc</em> values in the list above means (and this isn't an exhaustive analysis of each one):</p>

<ul>
<li><em>NOTHING</em>: Just as it looks, this value means that SQL Server thinks there is no problem with log truncation. In our case, though, the log is clearly growing, so how could we see <em>NOTHING</em>? Well, for this we have to understand how the <em>log_reuse_wait_desc</em> reporting works: the value reports what stopped log truncation the last time it was attempted. If the value is <em>NOTHING</em>, it means that at least one VLF was marked inactive the last time log truncation occurred. We could have a situation where the log has a huge number of VLFs, and there are a large number of active transactions, with each one having its <em>LOP_BEGIN_XACT</em> log record in successive VLFs. If each time log truncation happens only a single transaction has committed, and truncation only manages to clear one VLF, the speed of log truncation could be far slower than the speed of log record generation, and so the log has to grow to accommodate it. I'd use <a href="https://www.sqlskills.com/blogs/paul/script-open-transactions-with-text-and-plans/" target="_blank" rel="noopener noreferrer">my script here</a> to see how many active transactions there are, monitor VLFs with <em>DBCC LOGINFO</em>, and also track the <em>Log Truncations</em> and <em>Log Growths</em> counters for the database in the <em>Databases</em> perfmon object.
I need to see if I can engineer this case and repeatably see <em>NOTHING</em>.</li>
<li><em>CHECKPOINT</em>: This value means that a checkpoint hasn't occurred since the last time log truncation occurred. In the simple recovery model, log truncation only happens when a checkpoint completes, so you wouldn't normally see this value. It can happen if a checkpoint takes a long time to complete and the log has to grow while the checkpoint is still running. I've seen this on one client system with a very poorly performing I/O subsystem and a very large buffer pool with a lot of dirty pages that needed to be flushed when the checkpoint occurred. You could also see this if the only checkpoint in the log would be lost if the log was truncated. There must always be a complete checkpoint (marked as beginning and ending successfully) in the active portion of the log, so that crash recovery will work correctly.</li>
<li><em>LOG_BACKUP</em>: This is one of the most common values to see, and says that you're in the full or bulk_logged recovery model and a log backup hasn't occurred. In those recovery models, it's a log backup that performs log truncation. Simple stuff. I'd check to see why log backups are not being performed (disabled Agent job or changed Agent job schedule? backup failure messages in the error log?).</li>
<li><em>ACTIVE_BACKUP_OR_RESTORE</em>: This means that there's a data backup running or any kind of restore running. The log can't be truncated during a restore, and is required by data backups, so it can't be truncated there either.</li>
<li><em>ACTIVE_TRANSACTION</em>: This means that there is a long-running transaction that is holding all the VLFs active. The way log truncation works is that it goes to the next VLF (#Y) after the last one (#X) made inactive the last time log truncation worked, and looks at that.
If VLF #Y can't be made inactive, then log truncation fails and the <em>log_reuse_wait_desc</em> value is recorded. If a long-running transaction has its <em>LOP_BEGIN_XACT</em> log record in VLF #Y, then no other VLFs can be made inactive either, even if all other VLFs after VLF #Y have nothing to do with our long-running transaction - there's no selectively marking VLFs active vs. inactive. You can use <a href="https://www.sqlskills.com/blogs/paul/script-open-transactions-with-text-and-plans/" target="_blank" rel="noopener noreferrer">this script</a> to see all the active transactions.</li>
<li><em>DATABASE_MIRRORING</em>: This means that the database mirroring partnership has some latency in it and there are log records on the mirroring principal that haven't yet been sent to the mirror (called the <em>send queue</em>). This can happen if mirroring is configured for asynchronous operation, where transactions can commit on the principal before their log records have been sent to the mirror. It can also happen in synchronous mode, if the mirror becomes disconnected or the mirroring session is suspended. The amount of log in the send queue equates to the expected amount of data (or work) loss in the event of a crash of the principal.</li>
<li><em>REPLICATION</em>: This value shows up when there are committed transactions that haven't yet been scanned by the transaction replication Log Reader Agent job for the purpose of sending them to the replication distributor, or harvesting them for Change Data Capture (which uses the same Agent job as transaction replication).
The job could have been disabled, could be broken, or could have had its SQL Agent schedule changed.</li>
<li><em>DATABASE_SNAPSHOT_CREATION</em>: When a database snapshot is created (either manually or automatically by DBCC CHECKDB and other commands), the database snapshot is made transactionally consistent by using the database's log to perform crash recovery into the database snapshot. The log obviously can't be truncated while this is happening, and this value will be the result. See <a href="https://www.sqlskills.com/blogs/paul/do-transactions-rollback-when-dbcc-checkdb-runs/" target="_blank" rel="noopener noreferrer">this blog post</a> for a bit more info.</li>
<li><em>LOG_SCAN</em>: This value shows up if a long-running call to <em>fn_dblog</em> (see <a href="https://www.sqlskills.com/search/?q=fn_dblog" target="_blank" rel="noopener noreferrer">here</a>) is under way when log truncation is attempted, or when the log is being scanned during a checkpoint.</li>
<li><em>AVAILABILITY_REPLICA</em>: This is the same thing as <em>DATABASE_MIRRORING</em>, but for an availability group (SQL Server 2012 onward) instead of database mirroring.</li>
</ul>

<p>Ok - so maybe not such a quick consideration of the various values :-) Books Online has basic information about these values <a href="https://technet.microsoft.com/en-us/library/ms345414(v=sql.105).aspx" target="_blank" rel="noopener noreferrer">here</a>.</p>

<p>So which one would <em>I</em> be the most worried to see for a 24×7, critical database?</p>

<p>Let's think about some of the worst-case scenarios for each of the values:</p>

<ul>
<li><em>NOTHING</em>: It could be the scenario I described in the list above, but that would entail a workload change having happened for me to be surprised by it.
Otherwise it could be a SQL Server bug, which is unlikely.</li>
<li><em>CHECKPOINT</em>: For a critical, 24×7 database, I'm likely using the full recovery model, so it's unlikely to be this one unless someone's switched to simple without me knowing...</li>
<li><em>LOG_BACKUP</em>: This would mean something had happened to the log backup job, so it either isn't running or it's failing. The worst case here would be data loss if a catastrophic failure occurred, plus the next successful log backup is likely to be very large.</li>
<li><em>ACTIVE_BACKUP_OR_RESTORE</em>: As the log is growing, if this value shows up then it must be a long-running data backup. Log backups can run concurrently, so I'm not worried about data loss if a problem occurs.</li>
<li><em>ACTIVE_TRANSACTION</em>: The worst case here is that the transaction needs to be killed and then takes a long time to roll back, producing a lot more transaction log before the log stops growing.</li>
<li><em>DATABASE_MIRRORING</em>: The worst case here is that a crash occurs and data/work loss results because of the log send queue on the principal, but only if I don't have log backups that I can restore from. So maybe we're looking at a trade-off between some potential data loss and some potential downtime (to restore from backups).</li>
<li><em>REPLICATION</em>: The worst case here is that replication's got itself badly messed up for some reason and has to be fully removed from the database with <em>sp_removedbreplication</em>, and then reconfigured again.</li>
<li><em>DATABASE_SNAPSHOT_CREATION</em>: The worst case here is that there are some very long-running transactions that are still being crash recovered into the database snapshot and that won't finish for a while.
It's not possible to interrupt database snapshot creation, but it won't affect anything in the source database apart from log truncation.</li>
<li><em>LOG_SCAN</em>: This is very unlikely to be a problem.</li>
<li><em>AVAILABILITY_REPLICA</em>: Same as for database mirroring, but we may have another replica that's up-to-date (the availability group send queue could be for one of the asynchronous replicas that is running slowly, and we may have a synchronous replica that's up-to-date).</li>
</ul>

<p>(Edit: There's another value that was added in SQL Server 2014: <em>XTP_CHECKPOINT</em>. This is where the log cannot truncate until an in-memory checkpoint occurs, which won't happen until 1.5GB of log has been generated since the last in-memory checkpoint. In 2016, when this occurs, <em>log_reuse_wait_desc</em> now shows <em>NOTHING</em>.)</p>

<p>The most worrying value is going to depend on what is likely to be the biggest problem for my environment: potential data loss, reinitialization of replication, a very large log backup, or lots <em>more</em> log growth from waiting for a long-running transaction to commit or roll back.</p>

<p>I think I'd be most worried to see <em>ACTIVE_TRANSACTION</em>, as it's likely that the offending transaction will have to be killed, which might generate a bunch more log, cause more log growth, and mean more log to be backed up, mirrored, replicated, and so on. This could also lead to disaster recovery problems - if a long-running transaction exists and a failover occurs, the long-running transaction has to be rolled back before the database is fully available (even with fast recovery in Enterprise Edition).</p>

<p>So there you have it. 37% of you agreed with me.
I discussed this with the team just after I posted the survey, and we agreed on <em>ACTIVE_TRANSACTION</em> as the most worrying value (so I didn't just pick it because the largest proportion of respondents did).</p>

<p>I'm interested to hear your thoughts on my pick and the scenario, but please don't rant about how I'm wrong or the scenario is bogus, as there is no 'right' answer, just opinions based on experience.</p>

<p>And that's the trick when it comes to performance troubleshooting - although it's an art and a science, so much of how you approach it is down to your experience.</p>

<p>Happy troubleshooting!</p>
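<p>PS: As a quick reference, the checks described above can be run with a few lines of T-SQL. This is just a sketch - <em>sys.databases</em>, <em>DBCC SQLPERF</em>, and <em>DBCC OPENTRAN</em> are all standard, but the database name is a hypothetical placeholder:</p>

```sql
-- Why couldn't log truncation happen the last time it was attempted?
SELECT [name], [recovery_model_desc], [log_reuse_wait_desc]
FROM sys.databases
WHERE [name] = N'YourCriticalDB';   -- hypothetical database name

-- How large is the log for each database, and how full is it?
DBCC SQLPERF (LOGSPACE);

-- If ACTIVE_TRANSACTION shows up, find the oldest active transaction
DBCC OPENTRAN (N'YourCriticalDB');  -- hypothetical database name
```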
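<p>PPS: For the <em>NOTHING</em> scenario, where the question is how fast VLFs are being cleared, you can also watch the VLF count directly. A sketch, again with a hypothetical database name - <em>DBCC LOGINFO</em> is undocumented but works on all versions, and <em>sys.dm_db_log_info</em> is the documented alternative from SQL Server 2016 SP2 onward:</p>

```sql
-- One row per VLF; Status = 2 means the VLF is still active
DBCC LOGINFO (N'YourCriticalDB');   -- hypothetical database name

-- Documented alternative on SQL Server 2016 SP2 and later
SELECT COUNT (*) AS [TotalVLFs],
    SUM (CASE WHEN [vlf_active] = 1 THEN 1 ELSE 0 END) AS [ActiveVLFs]
FROM sys.dm_db_log_info (DB_ID (N'YourCriticalDB'));
```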