Very large tables are large because they hold not only current data but historical data as well. Companies are keeping more data, for longer, and want to evaluate that data for trends over time. These ever-growing data sets become difficult to manage because they must serve many distinct and often conflicting access patterns: critical “hot” data being inserted, somewhat recent “read-mostly” data that’s still fairly active, and older data that’s rarely used but still needed. All of this is “sales” data, so it seems natural to store it all in a single “sales” table – but how do we do that efficiently and effectively, so that modifications don’t affect queries and queries don’t affect modifications? Worse yet, the volatile data needs regular maintenance while the older, static data does not – so how do we maintain these large tables when only a small portion of each table actually needs it?
But it’s really about a lot more than this. SQL Server has long had features that tackle various aspects of the “VLT” (very large table) problem – partitioned tables since SQL Server 2005, and partitioned views since SQL Server 7.0 – but each feature has limitations of its own. This class directly addresses all of these problems and shows you a realistic and powerful architecture that gives you the best of all worlds: scalability, maintainability, and high availability.
This class is delivered live via online streaming.
Instructor: Kimberly L. Tripp.
For a detailed agenda click HERE.
If you need help justifying training to your organization, we can help.
Click HERE to go to our events registration page, and please make sure to select the correct class from the drop-down menu at the top of the page.
If you have any questions, please contact us.