Last week in Azure SQL Database – Part 4 – Performance Levels

In this post I’ll address perhaps the most important of all the announcements: performance levels.

The six new ASD tiers provide different levels of performance and different resource guarantees/reservations. There is a chart here: http://msdn.microsoft.com/library/azure/dn741340.aspx that lists performance levels (among other things), and there is a different chart here: http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx that gives more detail on performance, predictability, and resource guarantees (such as max worker threads and max sessions).

The part concerning predictability is useful because ASD servers (the physical servers in the datacenters, not the logical constructs that ASD calls “servers”) are shared. Because servers are shared, predictability is affected by the “noisy neighbor” syndrome. Imagine that someone sharing a physical server with you is running a database stress test…maybe during your business’s peak time…

Performance is defined in a new, curious unit of measure called the DTU, or Database Throughput Unit. DTUs are meant to allow comparison between the new tiers. Currently, there is no direct comparison between the new tiers and the old (Web/Business) tiers, possibly because there was no performance SLA in the old tiers at all.

DTUs are based on throughput with a described benchmark (the Azure SQL Database Benchmark; see http://msdn.microsoft.com/en-us/library/azure/dn741327.aspx). The benchmark, and how it is run, is described in a nice level of detail. However, it would be better if the source code and running instructions were released in the future. Unless they’ve already been released and I missed it.

For now, the DTU is a nice way to compare the tiers, and a known benchmark is a nice thing, but, to reiterate, there’s no way to ensure that you’re getting your bang for the buck. And remember that, at this point, any “smoke tests” you do on your own are being performed against a preview, not production. With Basic, it sometimes seems to take “a long while” (a nebulous term) to connect to a new Basic tier database the first time. After that, it’s faster (another nebulous term). Others have reported that Basic/Standard is slower than Web/Business on self-invented performance tests. It would be nice if, before the new tiers go GA, Microsoft ran the benchmark against a traditional Web/Business database (maybe a few times, taking the average, but posting the details of every run) just to assuage folks’ fears before they need to convert to the new tiers. MHO… After everyone’s converted, we can start talking about DTUs again, and they’ll become more interesting and meaningful.
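If you do run your own smoke tests against the preview, it’s worth capturing what the platform itself reports rather than relying on wall-clock impressions. A minimal sketch, assuming the preview exposes the documented sys.resource_stats view in the master database (column names may differ between preview and GA, and ‘MyDatabase’ is a hypothetical name):

```sql
-- Connect to the master database on your ASD server.
-- sys.resource_stats keeps historical resource-usage rows per database;
-- start with SELECT * since the column set may vary in preview.
SELECT TOP (24) *
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'   -- hypothetical database name
ORDER BY start_time DESC;
```

Comparing these numbers before and after a tier change gives you something a bit more concrete than “a long while” to argue with.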

BTW, to get information about your databases (new or old tiers) in T-SQL, just use this query. It has all of the tier and objective information. There is some redundancy in the metadata, so start with SELECT * and choose the information you’d like to see:

SELECT o.name, o.description, st.*, do.*
FROM slo_objective_setting_selections sl
JOIN slo_service_objectives o
  ON sl.objective_id = o.objective_id
JOIN slo_dimension_settings st
  ON sl.setting_id = st.setting_id
JOIN slo_database_objectives do
  ON o.objective_id = do.current_objective_id
ORDER BY o.name;
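If you just want the current edition and service objective of the database you’re connected to, there’s a lighter-weight alternative using the documented DATABASEPROPERTYEX properties (which, as far as I can tell, work against both old and new tiers):

```sql
-- Run in the user database itself.
-- 'Edition' returns e.g. Web, Business, Basic, Standard, or Premium;
-- 'ServiceObjective' returns the performance level (e.g. S1).
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```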

Cheers, @bobbeauch
