Transactional Replication Use Cases

This blog post is a part of a series on SQL Server Transactional Replication.  If you want to find the other posts in the series, check out the main page for the series here.

Why Transactional Replication?

Why in the world would you want to use Transactional Replication?  Isn’t that the thing that is impossible to manage, difficult to configure, and always prone to problems? As a consultant, I see a lot of scenarios where every problem becomes a nail because all you have is a hammer. Sometimes another technology can solve an existing problem, but too little is known about it for it to be considered, and in my experience Transactional Replication tends to fall into that category. In this post we are going to take a look at some of the more common Transactional Replication use cases, as well as some scenarios where it can be used to solve a business problem in a different way. As with anything, Transactional Replication is just another tool to have in your toolbox.

Offloading Read-only Workloads

Yes, today we have Availability Groups and readable secondary capabilities, and I’ve already written a blog post, Availability Group Readable Secondaries – Just Say No, that outlines the challenges you might have with that approach. If you don’t need the entire database for read-only queries, you can keep the readable copies as small as possible by creating transactional publications containing only the articles/tables that are needed. You can also index the subscriber specifically for the readable workload, and you don’t have to maintain those extra indexes on the writable copy at the publisher. The best thing about Transactional Replication is that it doesn’t require Enterprise Edition licenses on the subscribers; they can be Standard Edition, as long as the feature set you need and your hardware fit within Standard Edition’s limitations.
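As a rough sketch of what that looks like, the following publishes only two tables instead of the whole database. The database, publication, and table names here are made up for illustration; a real deployment also needs a Snapshot Agent job and subscriptions configured.

```sql
-- Hypothetical sketch: publish only the tables the read-only workload needs.
USE SalesDB;
GO

-- Enable the database for transactional publishing
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

-- Create the publication
EXEC sp_addpublication
    @publication = N'ReadOnlyOffload',
    @sync_method = N'concurrent',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Add only the articles (tables) the readable copy actually needs
EXEC sp_addarticle
    @publication   = N'ReadOnlyOffload',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';

EXEC sp_addarticle
    @publication   = N'ReadOnlyOffload',
    @article       = N'OrderLines',
    @source_owner  = N'dbo',
    @source_object = N'OrderLines';
```

Everything not added as an article simply never leaves the publisher, which is what keeps the subscriber small.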

Reporting and ETL operations

Reporting and ETL operations can be taxing on source systems due to the volume of data being read. By moving these operations off to a transactional subscriber, it is possible to remove that load from the production publisher entirely. In addition, it is possible to customize the subscriber side with auditing/logging triggers that mark changed records, allowing targeted incremental ETL operations that only handle the most recently changed data. In some cases Indexed Views can be created on the subscriber database to pre-aggregate the data and reduce the size of the data set that has to be processed, improving reporting performance without blowing up the size of the primary publisher database or impacting its performance.  However, you need to be careful with this and test subscriber changes for their impact on the distribution agent’s ability to replicate changes with low latency.  More than once I have worked with a client whose replication performance issues turned out to be rooted in Indexed Views, Full-Text Indexing, and other additions made to the subscriber database without testing their impact.
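A minimal sketch of the change-tracking trigger idea, with assumed table and column names: the trigger is deliberately a normal trigger (not marked NOT FOR REPLICATION), so it also fires when the Distribution Agent applies replicated changes on the subscriber.

```sql
-- Hypothetical subscriber-side change log for incremental ETL
CREATE TABLE dbo.OrdersChangeLog
(
    OrderID   int       NOT NULL,
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Processed bit       NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER dbo.trg_Orders_ChangeLog
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record one row per changed order so ETL can pull only the delta
    INSERT INTO dbo.OrdersChangeLog (OrderID)
    SELECT OrderID FROM inserted;
END;
```

The ETL job then reads rows WHERE Processed = 0 and flips the flag when done, instead of rescanning the full table. Keep the trigger body this lean; heavy trigger logic is exactly the kind of subscriber addition that can slow the Distribution Agent down.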

Isolating client data in a multi-tenant design

Multi-tenant databases are fairly common in today’s IT infrastructure, especially with many vendors providing software-as-a-service solutions for clients, or even having clients that support multiple locations with a single database solution. When a client needs a copy of all of their data for purposes outside of normal application use, splitting that data off can be a time-consuming process, and it is usually accomplished using ETL operations that build the same database schema and then load 100% of that single client’s data into it to provide a backup copy of just that client’s information. If this is a routine operation, moving 100% of the data each time is inefficient and creates impacts on the production database that can be removed through a filtered transactional publication. With a subscriber database for each filtered client, providing a new backup copy of the data takes only as long as backing up the subscriber database, simplifying the process and making it much faster.
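The filtering itself is just a row filter on the article. A hedged sketch, where the publication name, table, and ClientID column/value are assumptions about the schema:

```sql
-- Hypothetical horizontally filtered article: the subscriber for client 42
-- only ever receives that client's rows from the Invoices table.
EXEC sp_addarticle
    @publication   = N'Client42Data',
    @article       = N'Invoices',
    @source_owner  = N'dbo',
    @source_object = N'Invoices',
    @filter_clause = N'ClientID = 42';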

Generating Testing Databases

Many large production databases are commonly backed up and restored to test environments for development purposes, but in some cases there are auditing and other tables that are not needed in the development environment, and these can take a significant amount of storage on disk. I recently worked with a client database that was 10TB in size, nearly 8TB of which was internal auditing tables that were immediately truncated after restore to a development server. However, this required 10TB of storage to be available just to get the backup restored. To solve this, a transactional publication of every table except those audit tables was created to a subscriber, which then became the source for development restores. Once restored, the database only took 2TB of space and allowed 30TB of storage to be recovered from the test environment for other uses, since that storage had only been provisioned to allow the restores to happen before the truncation.  Now, before you say those tables should be in a separate database: that could be the case, but it’s not how things are currently deployed or implemented for this client, and you can’t have referential integrity across databases for the auditing tables.  They just don’t need the audit data to exist in a testing environment during testing, and this is an alternative means of saving a lot of storage given their current implementation.
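One way to script the "everything except the audit tables" publication is to generate the sp_addarticle calls from the catalog views. This is a hypothetical sketch: the publication name and the `Audit%` naming convention are assumptions, and you would review the generated script before running it.

```sql
-- Hypothetical sketch: generate sp_addarticle calls for every user table
-- whose name does not match the assumed audit-table naming convention.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'EXEC sp_addarticle @publication = N''DevSource'', '
    + N'@article = N''' + t.name + N''', '
    + N'@source_owner = N''' + s.name + N''', '
    + N'@source_object = N''' + t.name + N''';' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name NOT LIKE N'Audit%';   -- assumed naming convention for audit tables

PRINT @sql;   -- review, then execute the generated statements
```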

Transactional Replication as an HA Strategy?

This one can certainly generate some debate, and I’m not the first person to blog about using replication as a part of an HA strategy with SQL Server; Paul has actually blogged about it here.  However, depending on the business requirements, the design of the application, and the SLAs, replication can absolutely be used as a part of an HA strategy for a database.  My favorite deployment of replication for HA purposes is for read-intensive workloads using multiple subscribers behind a network load balancer like an F5 BIG-IP or Citrix NetScaler.  You can even leverage features like DataStream to do Content Switching and route connections to specific SQL Servers based on the data contained in the TDS stream.  While transactional replication alone is not a total HA solution, it can absolutely be a part of an HA strategy for SQL Server.

2 thoughts on “Transactional Replication Use Cases”

  1. It’s nice to see this post, Jonathan. Replication has had a bad reputation in the last 5 or so years. Nice to see you acknowledging some of its benefits and use cases. The ability to modify the subscriber DBs (e.g. different indexes) stands out for me. AGs and log shipped DBs don’t have that.

  2. While working for a managed services company I supported transactional replication for an American professional sports league. Updates were made at the various clubhouses and replicated back to the main corporate database for reporting purposes. I like to visualize it as a hub and spoke with each clubhouse as a spoke and the centralized corporate database as the hub. To this day I don’t think any other SQL Server technology would be a better design than transactional replication for this particular use case.

