What I’m Telling My Clients About SQL Server Denali

My consulting clients don’t have the time to read all the latest blog posts with news about SQL Server, so after a big event, I like to boil down the announcements to a few key paragraphs. I sum up the things they care about and try to put them in a business-friendly perspective. Here’s what I just sent my clients about Denali, the next version of SQL Server:

This week at the PASS Summit in Seattle, Microsoft took the wraps off the next version of SQL Server – code name Denali. They gave the community an early preview build and showed us some new features that will have a big impact on how we scale SQL Server, plan high availability and disaster recovery, build BI solutions, and more. There’s a heck of a lot to cover here, and I’m going to try to boil it down to the main things I want you to be aware of when you’re thinking about your 2011 architecture plans and budgets. Microsoft isn’t talking release dates yet, but don’t plan on this coming in the next 6 months. Forgive me for typos in here – I wrote this on my iPad on the plane ride home, so it’s a miracle that any words are spelled correctly.

If You Need To Handle More Load

The new AlwaysOn features will let us scale SQL Server in a wild new way. One server is designated as the primary production server, and all writes happen on that server. Up to four additional servers can be added as secondaries, and we get read-only access to them, including the ability to run our backups on the read-only servers instead of the production box. If the primary production server fails, one of the secondary boxes takes over as the new primary production instance.

This is by far my favorite feature added to SQL Server in a very long time. It helps me solve a lot of your scaling and performance challenges with no disruption to your applications. I’ve been steering developers toward using multiple connection strings: one for writes, one for reads, and one for delayed reads (like historical reporting). Expect to hear this from me a lot more over the coming months, because implementing this in your code today will put you in an amazing position when the new SQL Server ships. You’re going to be able to solve your very toughest performance challenges by writing checks instead of writing code – and sometimes that’s a better answer.
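
To make that concrete, here’s a hypothetical example of what those three connection strings might look like – the server names are invented, and the only thing that differs between them is which server the app points at:

    Writes (primary):            Server=SQLPROD01;Database=Sales;Integrated Security=SSPI;
    Reads (readable secondary):  Server=SQLREAD01;Database=Sales;Integrated Security=SSPI;
    Delayed reads (reporting):   Server=SQLREPORT01;Database=Sales;Integrated Security=SSPI;

The payoff is that when a readable secondary shows up, the application code doesn’t change at all – only the servers behind the read connection strings do.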

If You Need Easier High Availability & Disaster Recovery

The AlwaysOn features I mentioned above also have a lot of neat tricks in store for HA/DR. In the past, I’ve had to architect solutions requiring multiple pieces like clustering, database mirroring, log shipping, and replication. AlwaysOn brings the best of these different technologies into one feature and can replace all of them in certain environments.

A typical AlwaysOn HA/DR environment will probably look like this:

  • Server A – the production server
  • Server B – nearby server in the same datacenter with synchronous updates
  • Server C – nearby server with asynchronous updates, so it may be a few minutes behind, and users connect here to run reports
  • Server D – disaster recovery server in another datacenter with asynchronous updates. If the primary datacenter completely fails, this is our backup plan.

You might be doing something like this already using a variety of technologies, but now we’re going to be able to fail over multiple databases in groups, fail over the associated logins and jobs, and manage it all under one pane of glass in a way that won’t intimidate non-DBAs.
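
If you’re curious what that topology looks like in T-SQL, here’s a rough sketch based on the preview bits – Denali is pre-release, so the exact DDL may well change before it ships, and the server names and endpoints below are made up:

    -- Sketch only: pre-release syntax, made-up server names.
    -- A and B are synchronous with automatic failover; C and D are asynchronous,
    -- and C allows read-only connections for reporting.
    CREATE AVAILABILITY GROUP SalesAG
    WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY)   -- run backups on a secondary
    FOR DATABASE Sales
    REPLICA ON
        N'SERVER-A' WITH (ENDPOINT_URL = N'TCP://server-a.corp.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = AUTOMATIC),
        N'SERVER-B' WITH (ENDPOINT_URL = N'TCP://server-b.corp.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = AUTOMATIC),
        N'SERVER-C' WITH (ENDPOINT_URL = N'TCP://server-c.corp.local:5022',
                          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = MANUAL,
                          SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)),
        N'SERVER-D' WITH (ENDPOINT_URL = N'TCP://server-d.corp.local:5022',
                          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = MANUAL);

Failing the whole group over to SERVER-B or SERVER-D then becomes one operation instead of a database-by-database scramble, and the reporting users keep hitting SERVER-C.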

If You’re Considering Different Data Warehouse Vendors

If you’ve been tempted to switch to one of the in-memory or columnar-based databases that promise blazing speed for huge data warehouses, Microsoft is taking a big step to keep you around. They’ve added columnar indexes, the technology that lets multi-billion-row table scans finish in seconds instead of minutes. This new index type is aimed at data warehouses, not OLTP systems, and it’s best suited to systems that do batch loading and partitioning. If you’ve got a database with nightly loads, the way the columnar index is created and updated makes this a good reason to consider partition switching for your ETL.
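
To make that concrete, here’s a rough sketch of what the nightly load pattern might look like – the object names are made up, the pre-release DDL could change, and I’m assuming (based on what’s been shown so far) that a table with a columnar index won’t simply take normal inserts, which is exactly why partition switching matters:

    -- Sketch only: made-up names, pre-release syntax.
    -- One-time step: build the columnar index on the big fact table.
    CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
        ON dbo.FactSales (DateKey, StoreKey, ProductKey, Quantity, SalesAmount);

    -- Nightly ETL: bulk load an empty staging table shaped like one partition,
    -- build the matching columnar index on it, then switch it into the fact table.
    CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_Staging_ColumnStore
        ON dbo.FactSales_Staging (DateKey, StoreKey, ProductKey, Quantity, SalesAmount);

    ALTER TABLE dbo.FactSales_Staging
        SWITCH TO dbo.FactSales PARTITION 42;   -- the partition for the new date range

The switch itself is a metadata-only operation, so the nightly load never has to rebuild the columnar index across the whole multi-billion-row table.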

If You Need To Build BI Cubes

Microsoft showed their roadmap, and there’s some vibrant discussion in the BI community about what it means. At first glance, it appears that Microsoft is gradually abandoning SQL Server Analysis Services cubes in favor of the technology behind PowerPivot, their Excel-based tool whose in-memory databases are stored differently than either traditional databases or SSAS cubes. Blogger Chris Webb summed it up well, and Microsoft’s Amir Netz posted his spin on it in the comments at https://cwebbbi.wordpress.com/2010/11/11/pass-summit-day-2/.

What this means for you is that if you’re using SSAS today, you might want to take a breather to examine the new technologies. We’re at an awkward time – the new stuff isn’t out yet and will take some time to be able to handle big analysis tasks. The old stuff still works great, but the writing is on the wall, and there doesn’t appear to be an easy way to migrate your BI infrastructure to this new storage methodology. It reminds me of Microsoft’s transition from DTS to SSIS in 2005 – they didn’t give us a good upgrade path, and as a result companies had to pour resources into converting their DTS packages to SSIS by hand. Many of my clients just left their ETL processes in DTS rather than reinvest, and they’re still working fine. This may be a good approach for your SSAS cubes for the next 12-24 months too – wait it out rather than panicking just yet.

It also reminds me of Microsoft’s recent bugaboo with Silverlight. They said they were going to migrate mobile apps to HTML5 because Apple’s devices (with their huge tablet market share) won’t support Silverlight anytime soon. Some people read that as, “Silverlight is dead,” at which point Microsoft had to do a ton of backtracking. It’s still not clear what the real answer is.

If You Need to Store Files in SQL Server

If your apps need to upload documents or get transactional consistency for those documents, your options have been pretty crappy in past versions of SQL Server. I’ve historically told folks to avoid storing binaries like Word docs, images, and XML files inside the database – rather, store pointers to files on a Windows DFS file share instead. Well, now we’ve got another option – FileTable – but I’m not convinced yet that this one isn’t crappy too. In theory, you can simply enable this feature, set up a file share on the SQL Server, and any files written to that share will be stored in the database. Imagine running an UPDATE statement in SQL and changing attributes of files – pretty powerful stuff.
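
Here’s roughly what it’s supposed to look like in T-SQL – a sketch only, with made-up names; FILESTREAM has to be enabled on the instance and the database needs a FILESTREAM filegroup before any of this works, and the pre-release syntax may change:

    -- Sketch only: pre-release syntax, made-up names.
    -- Allow the file system (and apps on the share) to see the FILESTREAM data.
    ALTER DATABASE Sales
        SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'SalesFiles');

    -- The FileTable comes with a fixed schema (file name, path, attributes, contents)
    -- and shows up as a folder under the server's FILESTREAM share.
    CREATE TABLE dbo.Documents AS FILETABLE
        WITH (FILETABLE_DIRECTORY = N'Documents');

    -- Files copied into that folder become rows, and we can change their attributes in T-SQL:
    UPDATE dbo.Documents
       SET is_readonly = 1
     WHERE name LIKE N'%.docx';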

Readers with gray hair like me might remember the WinFS and Cairo stuff from about a decade ago when Microsoft announced that SQL Server would be the storage engine for Windows. That never came to fruition, but the code and the staff stuck around, and today they’re rising from the dead like a zombie. There’s a chance this feature will work, but the DBA in me wants to hit it in the head with a shovel. I know some of you are going to want this, though, so I’m going to investigate it as they build it out, and I’ll give it an open mind. (Braaaaains…)

If You Want to Be a Microsoft Certified Master

Big news here! For the last several months, my blogging frequency has gone way down because I’ve been working with Microsoft, my cohorts at SQLskills, and a handful of industry experts to help reinvent the MCM program. This week I’m finally able to share the fruits of our labor.

To get my MCM, I had to spend 3 weeks onsite at Microsoft in Seattle in order to take several exams and a lab. Now, the training and the tests have been split out. Candidates take a multiple-choice exam at Prometric, and if they pass, they can take the 6-hour hands-on qualifying lab to become a Master! It’s just that easy! Okay, well, the test and the lab aren’t exactly easy, but now I expect many more highly qualified and experienced SQL Server pros to give it a shot. You could try to take the tests cold, but at $500 per test and $2,000 for the lab, it’s not a wise bet. Plus, each retake costs the same, and there’s a 90-day waiting period between retakes.

To increase your chances of passing, you’re going to want to fill in your gaps with training, and that’s where SQLskills comes in. Paul and Kim’s one-week Immersions training has always been roughly the equivalent of the first week of MCM training – if you’ve already taken that, you’re one third of the way there. We’re announcing new Immersions training events in 2011 that will cover the rest of the MCM material – storage, tuning, clustering, CLR, XML, security, and more. Pricing for these events will be roughly the same as our existing Immersions events – figure around $3,000 per week plus travel & expenses.

If you’re already an expert in a particular subject area, maybe you want to try self-studying by watching the videos that Bob, Kim, Paul, and I produced for Microsoft. You can watch them for free here to get an idea of the kind of depth you need to have on each topic:

http://technet.microsoft.com/en-us/sqlserver/ff977043.aspx

If you’re considering a run at the MCM, let me know and I can help you assess what studying route you should take.

If You Want More Information

It’s way too early to talk training – in fact, I scheduled the entire month of December off so I could write MCM and SQL training for next year – but it’s the right time to talk long-term plans. I’m doing 2-4 hour architectural briefings to talk about what the next version means to you, your development plans, and your hardware budgets for next year. It’s probably too late to make any dramatic budget changes for FY2011, but if you’ve got any wiggle room, you might want to get a quick heads-up. To schedule one of these, drop me an email.

4 thoughts on “What I’m Telling My Clients About SQL Server Denali”

  1. Interesting that you’re espousing multiple connection strings for different types of actions (e.g. writes, reads, etc.) – what would be the benefit of that?

    Thanks

  2. Depending on the type of activity the app needs to do, we can scale those queries out to different servers. For example, when an app needs to run reporting queries on data that can stand to be more than 5-10 minutes old, we can use a database mirroring snapshot, a log shipping subscriber, or a replicated database.

  3. Brent, I’ll bring my sawed off shotgun to accompany your shovel!

    (braiiiinnnnnsss ~ I said STAY DOWN)

    Seriously, though, I hope it can work out as intended, but it looks like we have similar fears about how FileTable will be abused.

  4. Michael – hmm, I’ve never tried it with Windows auth. I’m usually a fan of SQL authentication to avoid problems when a domain controller drops offline – I’ve had some bad experiences with Windows auth on improperly configured domains. I haven’t come across this question at clients, but I’d be interested to hear how it works out for you.

