New class: Immersion Event on Clustering and Availability Groups

We have a third exciting new class debuting this October in Chicago: Immersion Event on Clustering and Availability Groups.

It’s a 2-day class, taught by Jonathan Kehayias, our resident expert on all things AlwaysOn. We’ve seen a huge surge in clients using FCIs and AGs, so this class will be really useful to many organizations, and is a partial replacement for our previous 5-day IEHADR class.

The modules are as follows:

  • Windows Server Failover Clustering Essentials
  • AlwaysOn Failover Clustered Instances
  • AlwaysOn Availability Groups
  • Implementation Case Studies

You can read a more detailed curriculum here and all the class registration and logistical details are here.

We hope to see you there!

SQLskills SQL101: Why is restore slower than backup

As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One question I get asked every so often is why it can sometimes take longer to restore a database from a full backup than it took to perform the backup in the first place. The answer is that in cases like that, there’s more work to do during the restore process.

A full backup has the following main phases:

  1. Perform a checkpoint.
  2. Read all in-use data from the data files (technically, reading all allocated extents, regardless of whether all 8 pages in the extent are in use).
  3. Read all transaction log from the start of the oldest uncommitted transaction as of the initial checkpoint up to the time that phase 2 finished. This is necessary so the database can be recovered to a consistent point in time during the restore process (see this post for more details).
  4. (Optionally test all page checksums, optionally perform backup compression, and optionally perform backup encryption).

A full restore has the following main phases:

  1. Create the data files (and zero initialize them if instant file initialization is not enabled).
  2. Copy data from the backup to the data files.
  3. Create the log file and zero initialize it. The log file must always be zero initialized when created (see this post for more details).
  4. Copy transaction log from the backup to the log file.
  5. Run crash recovery on the database.
  6. (Optionally test all page checksums during phase 2, perform decompression if the backup is compressed, and perform decryption if the backup is encrypted.)

Phase 3 above can often be the longest phase in the restore process, and is proportional to the size of the transaction log. This is done as a separate phase rather than being done in parallel with phases 1 and 2, and for a deep investigation of this, see Bob Ward’s recent blog post.

Phase 5 above might be the longest phase in the restore process if there were any long-running, uncommitted transactions when the backup was performed. This will be even more so if there are a very large number of virtual log files (thousands) in the transaction log, as that hugely slows down the mechanism that rolls back uncommitted transactions.
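
If you want to see where the time goes during a restore, here's a minimal sketch using the undocumented (but widely used) diagnostic trace flags 3004 and 3605, which write the internal backup/restore steps – including the log-zeroing and recovery phases – to the error log. The backup file path and database name here are just placeholders:

DBCC TRACEON (3004, 3605, -1);
GO

RESTORE DATABASE [Company]
FROM DISK = N'D:\SQLskills\Company_Full.bak'
WITH REPLACE;
GO

-- Look for the per-phase messages (e.g. zeroing the log file) in the error log
EXEC xp_readerrorlog 0, 1, N'Zeroing';
GO

DBCC TRACEOFF (3004, 3605, -1);
GO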

Here’s a list of things you can do to make restoring a full backup go faster:

  • Ensure that instant file initialization is enabled on the SQL Server instance performing the restore operation, to avoid spending time zero-initializing any data files that must be created. This can save hours of downtime for very large data files.
  • Consider backup compression, which can speed up both backup and restore operations, and save disk space and storage costs.
  • Consider using multiple backup files, each on a separate volume. SQL Server will recognize this situation and use parallel write threads (one per volume) to write to the files during the backup, and to read from them during the restore – speeding things up. If you have multiple database data files, a similar I/O parallelism will occur – providing even more of a speed boost (see the sketch after this list).
  • Try to avoid having long-running transactions that will take time to roll back.
  • Manage your transaction log to avoid having an excessive number of virtual log files, so if there are transactions to roll back, the roll back will go as fast as possible. See this blog post for more details.
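
To make the compression, multiple-file, and VLF suggestions above concrete, here's a minimal sketch – the database name and file paths are placeholders, and ideally the two backup files would be on separate physical volumes:

-- Back up to two files, with compression and page-checksum testing
BACKUP DATABASE [Company]
TO DISK = N'D:\Backups\Company_1.bak',
   DISK = N'E:\Backups\Company_2.bak'
WITH COMPRESSION, CHECKSUM, INIT;
GO

-- The restore will read from both files in parallel too
RESTORE DATABASE [Company]
FROM DISK = N'D:\Backups\Company_1.bak',
     DISK = N'E:\Backups\Company_2.bak'
WITH REPLACE;
GO

-- Check how many virtual log files the log has (DBCC LOGINFO is undocumented but widely
-- used; on SQL Server 2016 SP2 and later you can use sys.dm_db_log_info instead)
DBCC LOGINFO (N'Company');
GO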

Hope this helps!

Waits library now has infographics from SentryOne monitored instances

A few years ago I realized that there was a huge gap in knowledge in the SQL Server community – what do all the various wait types mean? – so I started a labor-of-love project to document all wait types and latch classes that have existed from SQL Server 2005 onward. In May 2016, I released the SQLskills Waits Types and Latch Classes Library, and I updated all my waits-related scripts to have auto-generated URLs into the library to help people troubleshooting performance issues. All 898 waits and 185 latches through SQL Server 2016 are in the library, with detailed information on 303 waits and 32 latches so far.

However, one thing missing from the library has been an indication of whether a particular wait is rare or whether it’s one that nearly everyone is likely to see on their instances. So I worked with my good friend Greg Gonzalez, the CEO of SentryOne (formerly known as SQL Sentry, and a long-time partner company with SQLskills), on some ideas about using their data warehouse of anonymous performance metrics from the many thousands of instances of SQL Server that their tools monitor.

The upshot of those discussions and recent work is that today we’re announcing that all wait types in the library have a new infographic that shows how prevalent each wait is.

Below is a screenshot of the infographic for the CXPACKET wait:

On the horizontal axis is a scale (switchable between linear and logarithmic) of what percentage of instances (monitored by SentryOne) experienced this wait over the previous calendar month, and on the vertical axis is the percentage of time that those instances that experienced that wait actually had a thread waiting for that wait type.

What does this all mean? You can now get a feel for whether you’re experiencing something rare or very commonplace.

What’s even better is that the infographics are interactive in the library – you can click on any of the waits shown and be taken to its page.

I think this is a really useful addition to the library and I’m very grateful to SentryOne for making this data available to the community!

Check out the upgraded library at https://www.sqlskills.com/help/waits/.

PS Many thanks to Jim Benton and Melissa Coates from SentryOne for building the infographics and the back-end data source, and to our own Jonathan Kehayias for helping me integrate the infographics into the library.

SQLskills SQL101: Query plans based on what’s in memory

As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One of the topics that I discussed in class today is why the query optimizer doesn’t know (or care) what’s in the buffer pool. (The query optimizer is the part of the query processor that’s responsible for compiling an efficient query plan, and the buffer pool is the cache of database data file pages that are in memory.)

Let’s investigate…

Scenario

Here’s a scenario:

  • Table T has two nonclustered indexes, A and B, that both cover query Q (a simple SELECT query)
  • Query Q will require a complete index scan of either index
  • Index A has 10,000 pages at its leaf level
  • Index B has 50,000 pages at its leaf level

Which index will the query optimizer use when compiling the query plan?

Cost-based…

SQL Server uses a cost-based optimizer, which uses various metrics and statistics to determine the most efficient query plan for the query (given the time limits imposed on its search of the space of all possible query plans). The ‘cost’ in ‘cost-based’ means that it considers the CPU cost and I/O cost of the various operators in the query plan, with the I/O cost essentially being relative to the number of physical reads required. And it assumes that nothing is in memory.

In the scenario above, the optimizer will choose a query plan using index A, as the most efficient plan will be the one involving the fewest physical reads, and with such a large difference between the page counts of indexes A and B, index A will be chosen for sure.
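
Incidentally, if you want to see what actually is in the buffer pool (which, again, the optimizer ignores), here's a minimal sketch that counts how many pages of each index on a table are memory resident. The table name T is the hypothetical one from the scenario above:

-- How many pages of each index on table T are currently in the buffer pool?
SELECT
    [i].[name] AS [IndexName],
    COUNT ([bd].[page_id]) AS [PagesInBufferPool]
FROM sys.indexes AS [i]
JOIN sys.partitions AS [p]
    ON [p].[object_id] = [i].[object_id]
    AND [p].[index_id] = [i].[index_id]
JOIN sys.allocation_units AS [au]
    ON [au].[container_id] = [p].[hobt_id]
    AND [au].[type] = 1 -- IN_ROW_DATA
LEFT JOIN sys.dm_os_buffer_descriptors AS [bd]
    ON [bd].[allocation_unit_id] = [au].[allocation_unit_id]
    AND [bd].[database_id] = DB_ID ()
WHERE [i].[object_id] = OBJECT_ID (N'T')
GROUP BY [i].[name];
GO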

Hypothetical memory-based…

Now let’s allow a hypothetical optimizer to base its plan choice on what’s in the buffer pool.

If index A is mostly not in the buffer pool and index B is mostly in the buffer pool, it would be more efficient to compile the query plan to use index B, for a query running at that instant. Even though index B is larger, and would need more CPU cycles to scan through, physical reads are waaaay more expensive (in terms of elapsed time) than CPU cycles so a more efficient query plan is the one that minimizes the number of physical reads.

This argument only holds, and a ‘use index B’ query plan is only more efficient than a ‘use index A’ query plan, if index B remains mostly in memory, and index A remains mostly not in memory. As soon as the relative proportions of indexes A and B that are in memory become such that the ‘use index A’ query plan would be more efficient, the ‘use index B’ query plan is the wrong choice.

The situations when the compiled ‘use index B’ plan is less efficient than the cost-based ‘use index A’ plan are (generalizing):

  • Indexes A and B are both memory resident: the compiled plan will use roughly 5 times more CPU than the optimal plan, as there are 5 times more pages to scan.
  • Neither index is memory resident: the compiled plan will do 5 times the number of physical reads AND use roughly 5 times more CPU.
  • Index A is memory resident and index B isn’t: all physical reads performed by the plan are extraneous, AND it will use roughly 5 times more CPU.

This means that the ‘use index B’ plan is really only the optimal plan at the time the query was compiled.

So although a hypothetical optimizer could make use of buffer pool contents knowledge to compile a query that is the most efficient at a single instant, it would be a very dangerous way to drive plan compilation because of the potential volatility of the buffer pool contents, making the future efficiency of the cached compiled plan highly unreliable.

And I also haven’t mentioned the extra cost of maintaining buffer pool contents knowledge in real time, and then potentially having to recompile queries that are now deemed to be inefficient because buffer pool contents have changed.

Summary

Although it doesn’t always get it right, the optimizer strives to produce the most efficient plan, assuming nothing is in the buffer pool. Understanding how the query optimizer comes to plan choice decisions is extremely useful for understanding query plans themselves and relating them to the code driving the plan.

Hope you found this helpful!

How are default column values stored?

An interesting question came up in class yesterday: how is a default column value stored, and what if some rows exist when a column is added and then the default value changes?

An example scenario is this:

  • Step 1: Create a table with two columns
  • Step 2: Add 10 rows
  • Step 3: Add a third column to the table with a non-null default
  • Step 4: Drop the default for the third column
  • Step 5: Add a new default for the third column

And selecting the initial 10 rows will return the third column populated with the initial default set in step 3. (It makes no difference if any rows are added between steps 3 and 4.)

This means that there *must* be two default values stored when a new column is added: one for the set of already-existing rows that don’t have the new column, and one for any new rows. Initially these two default values will be the same, but the one for new rows can change (e.g. in steps 4 and 5 above) without breaking the old rows. This works because after the new column is added (step 3 above), it’s impossible to add any more rows that *don’t* have the new column.

And this is exactly how it works. Let’s investigate!

Firstly I’ll create a simple database and test table and insert 10 rows. I’ll use the simple recovery model so I can clear the log by doing a checkpoint:

CREATE DATABASE [Company];
ALTER DATABASE [Company] SET RECOVERY SIMPLE;
GO
USE [Company];
GO

-- Create a test table to use
CREATE TABLE [Test] ([c1] INT IDENTITY, [c2] AS ([c1]));
GO
INSERT INTO [Test] DEFAULT VALUES;
GO 10
SELECT * FROM [Test];
GO
c1          c2
----------- -----------
1           1
2           2
3           3
4           4
5           5
6           6
7           7
8           8
9           9
10          10

Now I’ll clear the log, add the third column with the default, and look to see which system tables were inserted into because of the addition:

CHECKPOINT;
GO

-- Add column with default value
ALTER TABLE [Test] ADD [c3] CHAR (6) NOT NULL CONSTRAINT [OriginalDefault] DEFAULT 'BEFORE';
GO

SELECT [AllocUnitName] FROM fn_dblog (NULL, NULL)
WHERE [Operation] = 'LOP_INSERT_ROWS';
GO
AllocUnitName
-----------------------
sys.syscolpars.clst
sys.syscolpars.nc
sys.sysrscols.clst
sys.sysseobjvalues.clst
sys.sysschobjs.clst
sys.sysschobjs.nc1
sys.sysschobjs.nc2
sys.sysschobjs.nc3
sys.sysobjvalues.clst

Cool. These system tables have the following functions:

  • sys.syscolpars: table column definitions (relational metadata)
  • sys.sysrscols: rowset column definitions (Storage Engine metadata – info to allow interpretation of record structures on pages)
  • sys.sysseobjvalues: various Storage Engine values of different uses
  • sys.sysschobjs: relational objects (e.g. tables, constraints)
  • sys.sysobjvalues: various relational values of different uses

I’m going to look in these to find how the inserted rows relate to our table. I’m going to need a few IDs first (using my handy procedure to do that):

EXEC sp_allocationmetadata N'Test';
GO
Object Name Index ID Partition ID      Alloc Unit ID     Alloc Unit Type First Page Root Page First IAM Page
----------- -------- ----------------- ----------------- --------------- ---------- --------- --------------
Test        0        72057594040549376 72057594045792256 IN_ROW_DATA     (1:247)    (0:0)     (1:288)

And now I can query those system tables. Note that they’re ‘hidden’ system tables so you can’t query them unless you connect using the Dedicated Admin Connection. Easiest way to do that is to prefix your SSMS connection string with ‘admin:’ (and if you’re connecting to a remote server, the server needs to have sp_configure option remote admin connections enabled). Make sure you use the correct database after connecting, as the DAC drops you into master.
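
For reference, here's the minimal sketch for enabling remote DAC connections (only needed if you're connecting to a remote server – a local ‘admin:’ connection works without it):

EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
GO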

SELECT * FROM sys.syscolpars WHERE [id] = OBJECT_ID (N'Test');
GO
id        number colid name xtype utype length prec scale collationid status maxinrow xmlns dflt        chk idtval
--------- ------ ----- ---- ----- ----- ------ ---- ----- ----------- ------ -------- ----- ----------- --- ---------
245575913 0      1     c1   56    56    4      10   0     0           5      4        0     0           0   0x0A000000010000000100000000
245575913 0      2     c2   56    56    4      10   0     0           209    4        0     0           0   NULL
245575913 0      3     c3   175   175   6      0    0     872468488   3      6        0     261575970   0   NULL

These are the relational definitions of the columns in the table, and you can see that c3 is listed as having a default constraint, with ID 261575970.

SELECT * FROM sys.sysschobjs WHERE [id] = 261575970;
GO
id          name            nsid nsclass status type pid       pclass intprop created                 modified                status2
----------- --------------- ---- ------- ------ ---- --------- ------ ------- ----------------------- ----------------------- -------
261575970   OriginalDefault 1    0       131072 D    245575913 1      3       2017-04-26 13:37:42.463 2017-04-26 13:37:42.463 0

This is the constraint named OriginalDefault, with type D (default); its parent ID (pid) of 245575913 is the object ID of the Test table. The constraint’s definition is stored in sys.sysobjvalues, keyed by the constraint ID:

SELECT * FROM sys.sysobjvalues WHERE [objid] = 261575970;
GO
valclass objid     subobjid valnum value imageval
-------- --------- -------- ------ ----- ----------------------
1        261575970 0        0      2     0x28274245464F52452729

And the imageval column has the default value as hex-encoded ASCII values. Using the ASCII map on Wikipedia, the value is (‘BEFORE’), including the parentheses.

So that’s the default value for new rows. What about the default value for rows that already exist?

SELECT * FROM sys.sysrscols WHERE [rsid] = 72057594040549376;
GO
rsid              rscolid hbcolid rcmodified ti   cid       ordkey maxinrowlen status offset nullbit bitpos colguid
----------------- ------- ------- ---------- ---- --------- ------ ----------- ------ ------ ------- ------ -------
72057594040549376 1       1       10         56   0         0      0           128    4      1       0      NULL
72057594040549376 2       2       10         56   0         0      0           128    8      2       0      NULL
72057594040549376 3       3       0          1711 872468488 0      0           640    12     3       0      NULL

These are the Storage Engine definitions of the columns in the table. The status value indicates that the value may not be in the row, and where to get the default value from.

SELECT * FROM sys.sysseobjvalues WHERE [id] = 72057594040549376;
GO
valclass id                subid valnum value  imageval
-------- ----------------- ----- ------ ------ --------
1        72057594040549376 3     0      BEFORE NULL

And there is the Storage Engine storage for the default value for the c3 column for those rows that existed before c3 was added.

Now I’ll checkpoint, drop the default constraint, and look to see what happened in the log:

CHECKPOINT;
GO

ALTER TABLE [Test] DROP CONSTRAINT [OriginalDefault];
GO

SELECT [AllocUnitName] FROM fn_dblog (NULL, NULL)
WHERE [Operation] = 'LOP_DELETE_ROWS';
GO
AllocUnitName
---------------------
sys.sysobjvalues.clst
sys.sysschobjs.nc1
sys.sysschobjs.nc2
sys.sysschobjs.nc3
sys.sysschobjs.clst

So that’s the relational default value being deleted, in the reverse order from how it was added. Note that the Storage Engine default value wasn’t deleted.

Now I’ll create a new default constraint for the c3 column:

CHECKPOINT;
GO

ALTER TABLE [Test] ADD CONSTRAINT [NewDefault] DEFAULT 'AFTER' FOR [c3];
GO

SELECT [AllocUnitName] FROM fn_dblog (NULL, NULL)
WHERE [Operation] = 'LOP_INSERT_ROWS';
GO
AllocUnitName
---------------------
sys.sysschobjs.clst
sys.sysschobjs.nc1
sys.sysschobjs.nc2
sys.sysschobjs.nc3
sys.sysobjvalues.clst

And doing the various queries again gets us to the new relational column stored default value of (‘AFTER’), including the parentheses.
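
If you don't want to hard-code the new constraint ID when repeating those queries, a sketch like this (over the DAC again) follows the dflt pointer from sys.syscolpars straight to the stored value:

-- Find the relational default value for column c3 via its dflt pointer
SELECT [ov].*
FROM sys.syscolpars AS [cp]
JOIN sys.sysobjvalues AS [ov]
    ON [ov].[objid] = [cp].[dflt]
WHERE [cp].[id] = OBJECT_ID (N'Test')
    AND [cp].[name] = N'c3';
GO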

So just to prove what I said before investigating, I’ll add ten new rows, which will pick up the c3 value AFTER, and then query the table. The initial ten rows, which don’t have c3 stored in them, will show the original default value of BEFORE:

INSERT INTO [Test] DEFAULT VALUES;
GO 10

SELECT * FROM [Test];
GO
c1          c2          c3
----------- ----------- ------
1           1           BEFORE
2           2           BEFORE
3           3           BEFORE
4           4           BEFORE
5           5           BEFORE
6           6           BEFORE
7           7           BEFORE
8           8           BEFORE
9           9           BEFORE
10          10          BEFORE
11          11          AFTER 
12          12          AFTER 
13          13          AFTER 
14          14          AFTER 
15          15          AFTER 
16          16          AFTER 
17          17          AFTER 
18          18          AFTER 
19          19          AFTER 
20          20          AFTER 

Hope you found this interesting! (And don’t forget to drop your DAC connection.)

October 2017 classes in Chicago open for registration

I’ve just released our final set of 2017 classes for registration, including the new two-day class on Azure, the new three-day class on upgrading and migrating, and the new two-day class on clustering and availability groups.

Our classes in October will be in Chicago, IL:

  • IEPTO1: Immersion Event on Performance Tuning and Optimization – Part 1
    • October 2-6
  • IESSIS1: Immersion Event on Learning SQL Server Integration Services
    • October 2-6
  • IE0: Immersion Event for Junior/Accidental DBAs
    • October 2-4
  • ** NEW ** IECAG: Immersion Event on Clustering and Availability Groups
    • October 5-6

  • IEPTO2: Immersion Event on Performance Tuning and Optimization – Part 2
    • October 9-13
  • IEPS: Immersion Event on PowerShell for SQL Server DBAs
    • October 9-11
  • ** NEW ** IEAzure: Immersion Event on Azure SQL Database and Azure VMs
    • October 9-10
  • ** NEW ** IEUpgrade: Immersion Event on Upgrading SQL Server
    • October 11-13

You can get all the logistical, registration, and curriculum details by drilling down from our main schedule page.

We hope to see you there!

SQLskills SQL101: Restoring to an earlier version

As Kimberly blogged about recently, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One of the questions I get asked every so often is whether it’s possible to attach or restore a database to an earlier version of SQL Server. Usually the explanation behind the question is that the person accidentally attached the only copy of their database to a newer version than they wanted, or they were just trying out a pre-release version and now want to put their database back into their production system.

So is this possible? The very simple answer is: No.

SQL Server is down-level compatible but is not up-level compatible. This means you can take a database from an earlier version and attach/restore it to a newer version (I explained about this in a post here), but you can’t go backwards to an earlier version.

Why is this the case?

Upgrade steps

An upgrade, whether intentional or accidental, is a one-way operation and it is extremely difficult to reverse its effects. When you upgrade between versions of SQL Server, a series of upgrade steps are performed on the database. Each step usually involves some physical changes to the database, and each step increases the physical version number of the database.

For example, one of the major changes performed when a database was upgraded from SQL Server 2000 to SQL Server 2005 (yes, old and unsupported, but an easy-to-explain example) was to change the structure of the database’s system catalogs (often called the system tables or database metadata) that hold various metadata about tables, indexes, columns, allocations, and other details regarding the relational and physical structure of the database.

As each of these upgrade steps is performed, the database version number is increased. Here are some examples:

  • SQL Server 2016 databases have version number 852
  • SQL Server 2014 databases have version number 782
  • SQL Server 2012 databases have version number 706
  • SQL Server 2008 R2 databases have version number 661

This version number allows SQL Server to know the last upgrade step performed on the database, and whether the in-use SQL Server version can understand the database being attached/restored.
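
You can check a database’s physical version number yourself with the documented DATABASEPROPERTYEX function – a minimal sketch, using a placeholder database name:

-- Returns the internal (physical) database version number, e.g. 782 for SQL Server 2014
SELECT DATABASEPROPERTYEX (N'Company', 'Version') AS [DatabaseVersion];
GO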

Here’s an example of restoring a SQL Server 2012 database to a SQL Server 2014 server:

RESTORE DATABASE [Company2012]
FROM DISK = N'D:\SQLskills\Company2012_Full.bak'
WITH REPLACE;
GO
Processed 280 pages for database 'Company', file 'Company' on file 1.
Processed 3 pages for database 'Company', file 'Company_log' on file 1.
Converting database 'Company' from version 706 to the current version 782.
Database 'Company' running the upgrade step from version 706 to version 770.
Database 'Company' running the upgrade step from version 770 to version 771.
Database 'Company' running the upgrade step from version 771 to version 772.
Database 'Company' running the upgrade step from version 772 to version 773.
Database 'Company' running the upgrade step from version 773 to version 774.
Database 'Company' running the upgrade step from version 774 to version 775.
Database 'Company' running the upgrade step from version 775 to version 776.
Database 'Company' running the upgrade step from version 776 to version 777.
Database 'Company' running the upgrade step from version 777 to version 778.
Database 'Company' running the upgrade step from version 778 to version 779.
Database 'Company' running the upgrade step from version 779 to version 780.
Database 'Company' running the upgrade step from version 780 to version 781.
Database 'Company' running the upgrade step from version 781 to version 782.
RESTORE DATABASE successfully processed 283 pages in 0.022 seconds (100.430 MB/sec).

Up-level compatibility (or lack thereof…)

Versions of SQL Server cannot read databases upgraded to more recent versions of SQL Server – for instance, SQL Server 2012 cannot read a database that’s been upgraded to SQL Server 2014. This is because older versions do not have the code needed to interpret the upgraded structures and database layout.

Here’s an example of trying to restore a SQL Server 2014 database to a SQL Server 2012 server:

RESTORE DATABASE [Company2014]
FROM DISK = N'D:\SQLskills\Company2014_Full.bak'
WITH REPLACE;
GO
Msg 3169, Level 16, State 1, Line 51
The database was backed up on a server running version 12.00.4422. That version is incompatible with this server, which is running version 11.00.5343. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.
Msg 3013, Level 16, State 1, Line 51
RESTORE DATABASE is terminating abnormally.

In earlier versions, the messages weren’t always quite as nice and easy to understand.

And some people confuse database compatibility level with the database version. Compatibility level has nothing to do with up-level compatibility – it just changes how some query processor features behave.
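
As a quick sketch of the difference (database name is a placeholder): compatibility level is just a setting you can change at will, and changing it does nothing to the physical version number, so it won’t let you go back to an earlier version of SQL Server:

-- Compatibility level is a per-database setting...
SELECT [name], [compatibility_level] FROM sys.databases WHERE [name] = N'Company';
GO
ALTER DATABASE [Company] SET COMPATIBILITY_LEVEL = 110; -- behave like SQL Server 2012 for query processing
GO

-- ...but the physical version number is unchanged
SELECT DATABASEPROPERTYEX (N'Company', 'Version') AS [DatabaseVersion];
GO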

Summary

The simple thing to bear in mind is not to attach the only copy of your database to a newer version. It’s always better to restore a copy of the database; that way you’ve still got the original to fall back on, for whatever reason. This applies even if you’re deliberately performing an upgrade – I’d still want to keep the older copy of the database around in case some problem occurs with the upgrade.

If you *have* attached your only copy of the database to a newer version and want to go back to an earlier version, your only option is to script out the database structure, create the database again on the older version, and then transfer all the data from the newer version to the older version. Very tedious.

Hope you found this helpful!

PS There’s a comment below asking whether you can move back to an earlier SP or CU. Yes, for user databases, as long as the newer SP/CU didn’t change the physical version number (and none of them since 2005 SP2 and 2008 SP2 have done that).

New course: Understanding and Using Azure SQL Database

And to coincide with our new training class on Azure, we’ve just published a new Pluralsight course on Azure too!

Tim’s latest Pluralsight course – SQL Server: Understanding and Using Azure SQL Database – is just under two hours long and is based on his very popular user group and conference session.

The modules are:

  • Introduction
  • Understanding Azure SQL Database Features
  • Exploring Unsupported Features
  • Understanding Azure SQL Database Pricing
  • Migrating Data to Azure SQL Database
  • Monitoring and Tuning Azure SQL Database

Check it out here.

We now have 150 hours of SQLskills online training available (see all our 51 courses here), all for as little as $29/month through Pluralsight (including more than 5,000 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

New class: Immersion Event on Upgrading SQL Server

We have a second exciting new class debuting this October in Chicago: Immersion Event on Upgrading SQL Server.

It’s a 3-day class, taught by Glenn Berry and Tim Radney. We’ve seen a huge surge in clients upgrading from out-of-support versions of SQL Server, so this class will be really useful to many organizations.

The modules are as follows:

  • Upgrade Planning
  • Hardware and Storage Selection and Configuration
  • SQL Server 2016 Installation and Configuration
  • Upgrade Testing
  • Migration Planning
  • Migration Testing
  • Production Migration Methods

You can read a more detailed curriculum here and all the class registration and logistical details are here.

We hope to see you there!

New class: Immersion Event on Azure SQL Database and Azure VMs

We have a really cool new class debuting this October in Chicago: Immersion Event on Azure SQL Database and Azure VMs.

It’s a 2-day class, taught by Tim Radney. Azure is getting more and more popular, and we’re seeing many clients using it.

The modules are as follows:

  • Azure Virtual Machines
  • Migrating to Azure Virtual Machines
  • Azure SQL Database
  • Migrating to Azure SQL Database
  • Additional Azure Features

You can read a more detailed curriculum here and all the class registration and logistical details are here.

We hope to see you there!