SQLskills SQL101: Why do some wait types need to be ignored?

SQLskills has an ongoing initiative to blog about basic topics, which we’re calling SQL101. We’re all blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

Wait statistics analysis is one of my favorite things to talk about because it’s so incredibly useful for performance tuning and can dramatically shorten the time it takes to zero in on the root cause of a performance problem. But you have to do it correctly. You can’t just do a SELECT * FROM sys.dm_os_wait_stats. Various people have published scripts online to aggregate and display wait statistics in an actionable way, and my script is one of the most popular (latest version is always in this post).

One question I’m often asked is why my script has a list of wait types that it specifically filters out. The answer is that those wait types are what I call ‘benign’ – they’re usually not a problem, but they happen frequently enough from regular SQL Server operations that they’d otherwise show up as the top waits and obscure the waits you can actually do something about.
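As a minimal sketch of what I mean (this is not my full script, and the NOT IN list below is just a small subset of the benign wait types it filters out):

SELECT
    [wait_type],
    [wait_time_ms] / 1000.0 AS [WaitS],
    [waiting_tasks_count] AS [WaitCount]
FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
    -- Just a few of the benign wait types; the full script filters many more
    N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
    N'DIRTY_PAGE_POLL', N'LOGMGR_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
    N'XE_TIMER_EVENT', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
    AND [waiting_tasks_count] > 0
ORDER BY [wait_time_ms] DESC;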

For instance, if I take my waits script and remove all the filtering of benign waits, the results on my laptop where I’m forcing a tempdb contention problem are as follows:

WaitType                            Wait_S    Resource_S  Signal_S WaitCount Percentage AvgWait_S AvgRes_S AvgSig_S Help/Info URL
----------------------------------- --------- ----------  -------- --------- ---------- --------- -------- -------- -----------------------------------------------------------------------
SLEEP_TASK                          123335.21 123326.43   8.77     5232828   10.68      0.0236    0.0236   0.0000   https://www.sqlskills.com/help/waits/SLEEP_TASK
DIRTY_PAGE_POLL                     82215.60  82214.61    0.98     808502    7.12       0.1017    0.1017   0.0000   https://www.sqlskills.com/help/waits/DIRTY_PAGE_POLL
HADR_FILESTREAM_IOMGR_IOCOMPLETION  82215.08  82214.43    0.65     163809    7.12       0.5019    0.5019   0.0000   https://www.sqlskills.com/help/waits/HADR_FILESTREAM_IOMGR_IOCOMPLETION
LOGMGR_QUEUE                        82213.89  82210.58    3.31     669980    7.12       0.1227    0.1227   0.0000   https://www.sqlskills.com/help/waits/LOGMGR_QUEUE
SQLTRACE_INCREMENTAL_FLUSH_SLEEP    82212.97  82212.94    0.03     20546     7.12       4.0014    4.0014   0.0000   https://www.sqlskills.com/help/waits/SQLTRACE_INCREMENTAL_FLUSH_SLEEP
REQUEST_FOR_DEADLOCK_SEARCH         82212.74  0.00        82212.74 16442     7.12       5.0002    0.0000   5.0002   https://www.sqlskills.com/help/waits/REQUEST_FOR_DEADLOCK_SEARCH
LAZYWRITER_SLEEP                    82210.41  82209.82    0.59     86524     7.12       0.9501    0.9501   0.0000   https://www.sqlskills.com/help/waits/LAZYWRITER_SLEEP
CHECKPOINT_QUEUE                    82204.96  82204.92    0.04     125       7.12       657.6396  657.6394 0.0003   https://www.sqlskills.com/help/waits/CHECKPOINT_QUEUE
XE_TIMER_EVENT                      82204.08  0.00        82204.08 37409     7.12       2.1974    0.0000   2.1974   https://www.sqlskills.com/help/waits/XE_TIMER_EVENT
QDS_CLEANUP_STALE_QUERIES_TASK_
                   MAIN_LOOP_SLEEP  82201.37  82201.36    0.01     1371      7.12       59.9572   59.9572  0.0000   https://www.sqlskills.com/help/waits/QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP
QDS_PERSIST_TASK_MAIN_LOOP_SLEEP    82201.29  82201.28    0.01     1371      7.12       59.9572   59.9572  0.0000   https://www.sqlskills.com/help/waits/QDS_PERSIST_TASK_MAIN_LOOP_SLEEP
SP_SERVER_DIAGNOSTICS_SLEEP         82200.36  0.00        82200.36 299612    7.12       0.2744    0.0000   0.2744   https://www.sqlskills.com/help/waits/SP_SERVER_DIAGNOSTICS_SLEEP
XE_DISPATCHER_WAIT                  82198.10  82198.10    0.00     686       7.12       119.8223  119.8223 0.0000   https://www.sqlskills.com/help/waits/XE_DISPATCHER_WAIT

All of these are benign waits happening on system threads. For instance, you can see that the lazy writer thread on my instance is waking up every second or so to check for memory pressure in the buffer pool, finding none, and sleeping again (that’s the LAZYWRITER_SLEEP wait type, with an average resource wait time of 0.95s in the AvgRes_S column). You can also see that nearly all of these waits have a total wait time of around 82,200 seconds, which is how long my laptop has been running since its last reboot.

The point is that these waits always occur, and if you don’t filter them out they’ll show up as the most prevalent wait types on your instance, even though they’re not related to performance issues.

When I put the filters back in and re-run the script, I get the following output:

WaitType                            Wait_S    Resource_S  Signal_S WaitCount Percentage AvgWait_S AvgRes_S AvgSig_S Help/Info URL
----------------------------------- --------- ----------  -------- --------- ---------- --------- -------- -------- -----------------------------------------------------------------------
PAGELATCH_UP                        3451.97   3312.34     139.63   502282    56.73      0.0069    0.0066   0.0003   https://www.sqlskills.com/help/waits/PAGELATCH_UP
PAGELATCH_SH                        2324.96   1449.37     875.60   2030686   38.21      0.0011    0.0007   0.0004   https://www.sqlskills.com/help/waits/PAGELATCH_SH
LATCH_EX                            217.89    214.96      2.94     7628      3.58       0.0286    0.0282   0.0004   https://www.sqlskills.com/help/waits/LATCH_EX

Now I can see an indication of a problem, and I know to go look at my sys.dm_os_waiting_tasks script (latest version always here) for further investigation. (Also note the automatically-generated URLs, which will take you to the relevant page of my waits library for an explanation of each wait type and troubleshooting advice.)
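As a minimal sketch of that next step (my full script does much more), something like this shows which sessions are waiting right now, on what, and who is blocking them:

SELECT
    [owt].[session_id],
    [owt].[wait_type],
    [owt].[wait_duration_ms],
    [owt].[blocking_session_id],
    [owt].[resource_description]
FROM sys.dm_os_waiting_tasks AS [owt]
INNER JOIN sys.dm_exec_sessions AS [es]
    ON [owt].[session_id] = [es].[session_id]
WHERE [es].[is_user_process] = 1;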

Bottom line: always make sure you’re filtering out benign wait types so you’re not trying to troubleshoot a problem that you can’t do anything about.

Read committed doesn’t guarantee much…

A while back I was involved in an email thread where people were wondering about some ‘weird’ behavior from SQL Server. The problem was occurring on SQL Server 2016, using the default isolation level of read committed. The scenario was as follows (a T-SQL sketch follows the list):

  • Create a table with a few columns
  • Batch 1: In one SSMS window, do the following (which takes 10 seconds to run):
    • Start a transaction
    • Insert 1,000 rows into the table, with a 0.01 second WAITFOR DELAY between each insert
    • Commit the transaction
  • Batch 2: In a second SSMS window:
    • Select * from the table
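Here’s a minimal sketch of the repro (the table and column names are hypothetical):

-- Setup: a table with a few columns (a heap - no clustered index)
CREATE TABLE [dbo].[BrokenReads] ([c1] INT, [c2] CHAR (100));
GO

-- Batch 1: in one SSMS window (takes roughly 10 seconds)
BEGIN TRAN;
DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO [dbo].[BrokenReads] VALUES (@i, 'a');
    WAITFOR DELAY '00:00:00.010';    -- 0.01 second pause between inserts
    SET @i = @i + 1;
END;
COMMIT TRAN;
GO

-- Batch 2: in a second SSMS window, started while Batch 1 is running
SELECT * FROM [dbo].[BrokenReads];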

The ‘weird’ behavior is that when the “Batch 2” select completes, after having been blocked by the “Batch 1” transaction, it doesn’t return all 1,000 rows (even though “Batch 1” has completed). Furthermore, depending on when the “Batch 2” select is started during the 10 seconds that “Batch 1” executes, “Batch 2” returns different numbers of rows. This behavior has also been reported on earlier versions of SQL Server. It’s easy to reproduce on SQL Server 2016/2017, and can be reproduced on all earlier versions with a single configuration change (more details in a moment).

Additionally, if the table has a clustered index, all 1,000 rows are returned every time, on all versions of SQL Server.

So why is this weird? Many people expect that all 1,000 rows will be returned every time AND that the structure of the table or the version of SQL Server should not make any difference.

Unfortunately, that assumption is not correct when using read committed. The read committed isolation level guarantees that only committed data will be read; it does NOT guarantee how much of the committed data will be returned and is prone to inconsistent analysis problems. If you want to avoid inconsistent analysis, you’d need to increase your isolation level (if using locking) or change to versioning for read committed (or even snapshot isolation).
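For example, here’s a sketch of those options (the database name is hypothetical):

-- Use versioning for read committed (read committed snapshot)
ALTER DATABASE [MyDB] SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow snapshot isolation and opt in per session
ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Or, if staying with locking, increase the isolation level per session
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;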

However, I do agree that the assumption is reasonable, even though it’s not correct.

The definition of the guarantee is straightforward, however: SQL Server guarantees that only committed data will be read, not how much of it will be read, and so the amount of committed data returned in this case is variable. Furthermore, depending on the table structure, the SQL Server version, and a configuration option (which I’ve mentioned twice now, but haven’t explained, as I don’t want to spoil my story…), the number of rows returned will vary wildly.

So, what’s going on? Why the discrepancy in the number of rows?

It’s because the table is a heap.

The ‘weird’ behavior manifests itself when the heap has an extent allocated to it immediately, from which the first data pages are allocated. When the allocation-order scanner for the select starts, it looks at the PFS bytes for the 8 pages in the extent to see which ones are allocated; those are the pages that will be scanned. Depending on when the select starts within the 10 seconds that “Batch 1” executes, more or fewer rows will be read by the scanner (and returned in the “Batch 2” results), because more or fewer of the PFS-byte 0x40 bits will have been set, indicating that the corresponding pages in the extent are allocated.
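If you want to look at the PFS bytes yourself, here’s a sketch using the undocumented DBCC PAGE command (database name hypothetical; the first PFS page is always page 1 of each data file):

DBCC TRACEON (3604);            -- send DBCC PAGE output to the console
GO
DBCC PAGE (N'MyDB', 1, 1, 3);   -- file ID 1, page ID 1 = first PFS page
GO
-- The output lists the allocation status of each page range in the file,
-- so you can watch pages in the heap's extent become allocated over time.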

For a table to have an extent allocated to it immediately, mixed pages have to be disabled. That’s the default from SQL Server 2016 onward, which is why the ‘weird’ behavior manifests with a small number of rows on SQL Server 2016. However, if you’ve enabled trace flag 1118, you’ll see this behavior on all versions, which is why some people report seeing the ‘weird’ behavior on versions earlier than SQL Server 2016.
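For reference, a sketch of the relevant settings (database name hypothetical):

-- SQL Server 2014 and earlier: disable mixed-page allocations instance-wide
DBCC TRACEON (1118, -1);

-- SQL Server 2016 onward: per-database setting, OFF (disabled) by default
-- for user databases; trace flag 1118 no longer has any effect
ALTER DATABASE [MyDB] SET MIXED_PAGE_ALLOCATION OFF;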

When mixed pages are NOT disabled (i.e., on versions earlier than SQL Server 2016 when trace flag 1118 is not enabled), the first 8 pages allocated are mixed pages, and the allocation-order scanner has to pick them up individually and sees them all (without going into details of the synchronization around the single-page slot array on the first IAM page in the IAM chain/allocation unit for the table). So as long as no more than 8 data pages are used by the insert, all the rows on them will be returned by the select in this example.

So there you have it, the behavior is “by design” and hasn’t changed at all, but is understandably confusing unless you know exactly how the Engine is performing the select under the covers.

And if you create a clustered index, the ‘weird’ behavior doesn’t occur for this example in any version. This is because the select gets hung up on the S(hare) row lock for the first row in the clustered index, and by the time the select is granted the S lock, the insert has completed and the index leaf scan picks up all 1,000 rows.
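Continuing the earlier sketch, assuming the hypothetical table name:

-- With a clustered index, the select blocks on the first row's S lock
-- and then returns all 1,000 rows once the insert transaction commits
CREATE CLUSTERED INDEX [CIX_BrokenReads] ON [dbo].[BrokenReads] ([c1]);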

Don’t think that using a clustered index stops ‘weird’ behavior under read committed; it doesn’t. However, it can reduce some forms of inconsistency. In this old blog post, I demonstrate a scenario where a scan of a clustered index returns four rows when only three rows exist, using read committed. Kimberly also describes and shows “inconsistent analysis” in her blog posts on the topic.

So to summarize: the read committed isolation level guarantees that only committed data is read; however, there are many forms of potential problems (known as “inconsistent analysis”). It really pays to understand what these are, and what the various isolation levels that SQL Server provides do and do not allow. In addition to the blog posts above, check out the MSDN lesson titled Understanding the Available Transaction Isolation Levels here.

SQLskills SQL101: Why does repair invalidate replication subscriptions?

SQLskills has an ongoing initiative to blog about basic topics, which we’re calling SQL101. We’re all blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

Whenever I’m teaching about recovering from corruption, I always stress that if the REPAIR_ALLOW_DATA_LOSS option of DBCC CHECKDB (which I’ll just call ‘repair’ from now on) has to be used, you should do the following (sketched in T-SQL after the list):

  • Run another DBCC CHECKDB to make sure that repair fixed everything and no new problems were exposed (by the repair fixing something that was blocking DBCC CHECKDB‘s access to a portion of the database)
  • Run DBCC CHECKCONSTRAINTS on any tables or indexes that were repaired and are involved in constraints (stated in Books Online too)
  • Reinitialize any replication subscriptions that are based on tables that were repaired (stated in Books Online too)
  • Reevaluate your HA/DR strategy so you don’t have to use repair again in future
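Here’s that sketch (the database and table names are hypothetical):

-- Repair requires the database to be in single-user mode
ALTER DATABASE [MyDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS);

-- Make sure the repair fixed everything and exposed no new problems
DBCC CHECKDB (N'MyDB') WITH NO_INFOMSGS;
ALTER DATABASE [MyDB] SET MULTI_USER;

-- Re-check constraints on any repaired tables that are involved in them
DBCC CHECKCONSTRAINTS (N'MyTable');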

One question I’m often asked is why running repair can break replication.

There are two ways that replication can be affected: repairs on replication metadata tables, and repairs on anything else to do with a subscription.

Repairs on replication metadata tables

This is the simplest case to explain. If the repair operation affects any of the replication metadata tables (i.e., deletes some data from them), the entire replication publication will be in an inconsistent state, and you should remove replication completely from the database and reconfigure it. This isn’t limited to a single subscription – all replication should be reconfigured.

Repairs on anything else

Transactional replication captures changes to the publication database by analyzing the transaction log, looking for transactions that change data in any of the publications, and converting those operations into logical operations that can be applied to the subscribers. Merge replication captures changes to the publication database using DML triggers, converting those operations into logical operations that can be applied to the subscribers.

Neither of these mechanisms can capture the operations performed by repair. Repair operations are always direct physical changes to the database structures, made to fix inconsistencies in those structures (e.g. a database page, a table record, or a linkage between two pages), as opposed to the physical changes resulting from queries performing inserts, updates, or deletes on tables.

These repair operations cannot be translated into logical operations that can be applied to replication subscribers, because there are no logical operations that can be expressed using T-SQL for the equivalent of the direct structural changes that repair performs. Replication does not preserve the exact physical location of a particular record between the publication and subscription databases, so a direct change to record Y on page X in the publication database could not be replayed on the subscription database (remember, replication ships logical changes, not physical changes). This means that if any table that is part of a replication publication is changed by a repair operation, the replication subscription is no longer valid and must be reinitialized.

As an example, imagine a repair operation is forced to remove a data page from a unique clustered index (essentially deleting some table records), and the subscription is NOT reinitialized. Those records would still exist on the replicated copy of the table. If a subsequent insert operation inserted records with cluster key values corresponding to the records deleted by the repair operation, the Distribution Agent would fail when trying to apply the inserts to the replicated copy – because the repair operation was not applied to the subscription database, a duplicate key violation error would occur when attempting to apply the insert to the replicated table.

A replication subscription must always be reinitialized if any table in the publication is affected by a repair operation, or the replication metadata tables are repaired.

Thanks

New live online training class on using Query Store in May!

Continuing our series of live, online classes, Erin will be delivering her new IEQS: Immersion Event on Solving Common Performance Problems with Query Store in May! The class will be delivered live via WebEx on May 22-23 (roughly 12-13 hours of content including Q&As; about the same as two full workshop days!) and the recording will be available to attendees for six months following the end of the class.

Rather than have people try to watch a full day of training at their computer for one or more days, the class will run from 10am to 3pm PST each day, with two 90-minute teaching sessions, each followed by Q&A, and a lunch break. We chose to do this, and to spread the class over a few days, so the times work pretty well for those in the Americas, Africa, and Europe. We also realize that this is complex content, so we want to give attendees time to digest each day’s material, plus extensive Q&A.

Here are some select quotes from prior attendees of Erin’s in-person classes:

  • “Extremely pleased with the course. FAR exceeded my expectations.”
  • “Well worth the time and expense to attend. Would highly recommend this to others.”
  • “I thought this was extremely worthwhile. I have been a DBA for a few years and this was a great refresher for a lot of things I don’t do very often but should.”
  • “Found class extremely helpful and plan to take the next one.”
  • “Great course – very informative – very great instructors – I am sure to be back!”
  • “I was familiar with all of the topics in this course, but barely understood most. I feel I will be able to begin to do all of the things I need to take simpler tasks from our DBAs to allow them to focus on the more advanced tasks. That was exactly my goal for taking this class.”
  • “Great course. Good new info for me, plus refresher on other info. Thanks!”
  • “Both Erin and Jon have a vast knowledge of not only SQL Server & tools, but also effective presentation.”
  • “Thanks for taking the time to better my knowledge of SQL and allow me to better my career.”

The modules covered will be:

  • Query Store Fundamentals
  • Understanding the Query Store Data
  • Workload Characteristics and Query Store Performance
  • Finding Performance Issues
  • Forcing Plans
  • Automatic Tuning
  • Other Uses of Query Store
  • Visualizing Query Store Data

The price of the class is US$795 (or US$695 for prior live, online attendees) and you can get all the details here.

The class was also announced in our newsletter today, with a US$100 discount for those people who received that newsletter, valid for two weeks. All future live, online classes will always feature a two-week discount for newsletter subscribers.

We decided to start teaching some live, online classes as we recognize that not everyone can travel to our in-person classes, or take that time away from work or family, or simply have travel budget as well as training budget. People also have different ways they learn, some preferring in-person training, some preferring recorded, online training, and some preferring live, online training.

We’ll be doing more of these in 2018, on a variety of topics, so stay tuned for updates (and discounts through the newsletter).

We hope you can join us!

SQLskills SQL101: Is the recovery interval guaranteed?

SQLskills has an ongoing initiative to blog about basic topics, which we’re calling SQL101. We’re all blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 blog posts, check out SQLskills.com/help/SQL101.

One of the concepts I find people misunderstand frequently is the recovery interval, either for the server as a whole or the per-database setting that was introduced in SQL Server 2012 for indirect checkpoints.

There are two misconceptions here:

  1. The recovery interval equals how often a checkpoint will occur
  2. SQL Server guarantees the recovery interval (i.e. crash recovery for the database will only take the amount of time specified in the recovery interval)

A bit of background: crash recovery has two tasks to perform: replaying log records from committed transactions (called REDO) and removing the effect of log records from uncommitted transactions (called UNDO). REDO only has to occur if there were committed transactions whose changed data pages had not yet been written to disk (which is done by periodic checkpoints, or by the lazy writer background thread if there’s memory pressure on the buffer pool).

The recovery interval specifies an ideal upper bound on how long the REDO portion of crash recovery will take. The length of time REDO takes depends on how many log records need to be replayed from committed transactions. To help REDO stay on track, the recovery interval setting forces a checkpoint to occur when a certain number of log records have been generated (a number calculated using the chosen recovery interval).

So although recovery interval *does* control how often a checkpoint occurs, the recovery interval time does not equal the time between checkpoints.

For instance, if the recovery interval is set to one minute (the default), and a checkpoint occurs *now*, when will the next checkpoint occur? If activity in the database is very infrequent, a checkpoint may not occur for a very long time because the calculated recovery time for that activity will be less than one minute. If activity in the database is extremely high, checkpoints may occur every few seconds. Checkpoint frequency entirely depends on how fast log records are being generated *and* the recovery interval setting.
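For reference, here’s a sketch of setting each of the two options (database name hypothetical):

-- Server-wide recovery interval for automatic checkpoints, in minutes
EXEC sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure N'recovery interval (min)', 1;
RECONFIGURE;

-- Per-database indirect checkpoints (SQL Server 2012 onward)
ALTER DATABASE [MyDB] SET TARGET_RECOVERY_TIME = 60 SECONDS;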

And of course SQL Server cannot *guarantee* the recovery interval. Imagine the recovery interval is set to one minute, and I start a transaction that updates millions of records over several hours. If SQL Server crashes just before the transaction commits, how long do you think it will take to roll back the millions of updates? Certainly it will take much longer than one minute – the configured recovery interval.

So I can update my definition: the recovery interval is the ideal upper bound on how long the REDO portion of crash recovery will take, assuming no long-running transactions.

Hope this helps clear up any misconceptions.

No Fall U.S. classes in 2018

We’ve been finalizing our schedule for the remainder of 2018 and wanted to let everyone know that we will not have any Immersion Events in the U.S. this Fall. While we typically have several weeks of in-person classes in Chicago in October or November, this year we already have several events and engagements scheduled, including multiple Immersion Events in London.

The only in-person Immersion Events in the U.S. this year will be in Chicago in April/May and Bellevue, WA in June.

This means that each of our in-person Immersion Events will only be offered once in the U.S. this year (with an extra IEPTO1 in June in Bellevue). These classes are all filling up quickly, so please don’t wait until the last minute to register, as you might miss out. If you haven’t checked our event list in a while, here’s what we’re offering:

  • IEPTO1: 5 days, Performance Tuning and Optimization, Part 1
  • IEPTO2: 5 days, Performance Tuning and Optimization, Part 2
  • IE0: 3 days, Junior/Accidental DBA
  • IEUpgrade: 3 days, Upgrading and New Features
  • IEAzure: 2 days, Azure SQL Database and Azure VMs
  • IECAG: 2 days, Clustering and Availability Groups
  • IEPS: 3 days, PowerShell for the SQL Server DBA
  • IEBIStrat: 3 days, Developing a BI and Analytics Strategy
  • IEPML: 5 days, Practical Machine Learning, AI, and Data Science
  • IESSIS1: 5 days, Learning SQL Server Integration Services

You can see the April/May/June U.S. class schedule here.

Thanks

PS Rest assured, there are no changes to our schedule for London in September (our two performance tuning classes, plus Azure and clustering/AGs) – we’re looking forward to seeing everyone!

New Pluralsight course: Building Multi-instance Asynchronous Applications

Jonathan’s latest Pluralsight course has been published – SQL Server: Building Multi-instance Asynchronous Applications. It’s 2.5 hours long, and from the course description:

The Service Broker feature of SQL Server is transforming the way developers think about processing requests within the database and opening new opportunities for scalable applications. This course builds on the basic concepts introduced in the SQL Server: Building Simple Asynchronous Applications course by introducing single-instance multi-database and multi-instance configurations for Service Broker. In this course, SQL Server: Building Multi-instance Asynchronous Applications, you’ll learn how to configure security for Service Broker operations across databases, as well as the networking and routing configurations for secure communications between multiple instances of SQL Server. Next, you’ll explore bidirectional conversations between services to allow for status updates, and the appropriate way of ending a conversation between two services. Finally, you’ll see how to maintain and troubleshoot Service Broker configurations: using the SSBDiagnose tool to validate that all of the required components have been appropriately configured, using the Dynamic Management Views to check message statuses, and using the events in SQL Trace and Extended Events to troubleshoot Service Broker problems. When you’ve finished this course, you’ll have the skills and knowledge to start implementing multi-database and multi-instance Service Broker configurations.

The modules are:

  • Introduction
  • Intra-instance Service Broker Configurations
  • Multi-instance Service Broker Configurations
  • Maintaining Service Broker Configurations
  • Troubleshooting Service Broker Configurations

Check it out here.

We now have more than 175 hours of SQLskills online training available (see all our 60 courses here), all for as little as $29/month through Pluralsight (including more than 6,000 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

New Pluralsight course: Understanding, Configuring, and Troubleshooting Database Mirroring

Glenn’s latest Pluralsight course has been published – SQL Server: Understanding, Configuring and Troubleshooting Database Mirroring. It’s 2.5 hours long, and from the course description:

SQL Server database mirroring is a very useful technology for high availability and disaster recovery, as well as for general purpose database movement, migration, and upgrades. In this course, SQL Server: Understanding, Configuring and Troubleshooting Database Mirroring, you’ll first learn the basics of database mirroring, its capabilities, and its limitations. Next, you’ll explore how to prepare an environment for database mirroring, and then how to configure database mirroring using either SQL Server Management Studio or T-SQL scripts. Then, you’ll learn how to monitor a database mirroring environment and troubleshoot common problems. Finally, you’ll discover how to use database mirroring for data migration and upgrading a database to a new version of SQL Server. When you’ve finished this course, you’ll have the skills and knowledge to start configuring, using and troubleshooting database mirroring.

The modules are:

  • Introduction
  • Database Mirroring Overview
  • Preparing for Database Mirroring
  • Configuring Database Mirroring
  • Monitoring and Troubleshooting Database Mirroring
  • Migrating Data Using Database Mirroring

Check it out here.

We now have more than 175 hours of SQLskills online training available (see all our 60 courses here), all for as little as $29/month through Pluralsight (including more than 6,000 other developer and IT training courses). That’s unbeatable value that you can’t afford to ignore.

Enjoy!

Setting mentoring ground rules

One of the things I always recommend in the first newsletter of each year is to try to get a mentor. On the flip side of that is that one of the most rewarding things you can do in your professional life is to *be* a mentor to one or more people. I mentored more than 50 people back in 2015, and I think I’m going to do some more mentoring, starting in the Spring (watch the blog for details – but definitely not 50 again!).

If you’re going to be a mentor, I think it’s very important to set ground rules. One of my mentees from 2015 just sent me email saying he’s going to be mentoring this year (*so* cool to see people paying it forward!) and asking about the ground rules I set. I’ve been mentoring people for almost 20 years and I think you need to cover at least:

  • How communication will work, and expectations on both sides for how long a response may take
  • How to end the relationship
  • Which areas are fair game, and which aren’t (e.g. I wouldn’t answer SQL Server questions with the 2015 group)
  • Statement of confidentiality and trust
  • What actions will cause the relationship to end
  • How not to react to things I’ll say (e.g. I’m very blunt and honest – the whole point of a mentoring relationship is clear feedback)

That way there’s no ambiguity and you both know how things are going to work.

As an example, here’s the set of rules I sent to all my mentees to ‘ok’ before I’d move forward with them:

Lots of housekeeping, ground rules, and expectation setting in this email so PLEASE read through this email and let me know whether you’re comfortable with everything I say below. If not, for any reason, just say so (you don’t need to say why) and we’ll walk away with no hard feelings. I also need to know that you’re getting my emails. Once I hear back from you, I’ll send the kick-off part 2 email.

As I mentioned in the original blog post, I do all mentoring over email. I have two reasons for that:

It makes the whole process asynchronous, which means there’s no immediate pressure to respond, and so it’s easier on us all. It also means we can think about responses for a week or two, which is especially useful as I’ve got several long trips without Internet this year.

I really don’t like talking on the phone. Phones are a necessary evil, but they don’t make for well-thought-out discussions.

Once I’ve had a response from you so I know my email’s getting through, whenever I send an email out, I won’t chase you up to reply to it. If you go dark for, say, a month, I’ll assume you’re no longer interested or you’re done with mentoring. I’d appreciate a note to that effect if you decide that, though, so we get positive closure.

You can expect responses from me within a week or two. If a response will take longer, I’ll let you know. If it goes more than two weeks and you haven’t heard from me, please ping me, as your email might have been lost or (rarely) I might have accidentally deleted or mis-filed it.

I take all mentoring relationships seriously, and I expect you to as well. You’re going to be opening up and telling me things about yourself that you might not share/have shared with anyone else (such as hopes, failures, negatives, positives, weaknesses, strengths, work problems, family problems, whatever). Anything you tell me will remain strictly between us, and only us.

You can publicize whatever you want about the mentoring relationship and any advice I give you, unless I (rarely) tell you something is in confidence.

Do not tell me anything that is illegal or unethical (e.g. you cheated on a certification) – that will cause me to end the mentoring relationship immediately. No second chances on this. If there’s something we’re discussing that is making me uncomfortable, I’ll let you know.

I will not discuss politics or religion 1-1 through email. Sheep, diving, and electronics are fair game though :-)

We’re going to get to know each other better over the rest of this year. Feel free to Facebook friend me if you want – no pressure – I don’t care either way, but some people don’t like to presume to ask.

I will be blunt and honest with you; I don’t sugar-coat things. You’ve probably already seen that from me on Twitter and on my blog. Getting straight to the point in any discussion is the easiest way to get a point across clearly and with little or no ambiguity. Please don’t ever take anything I say to you as trying to be nasty, belittling, or destructive criticism – that’s not how I work or who I am. The whole point of a mentoring relationship is to be constructive, so please take everything I say with that in mind. If you find yourself having a hard time with that, let me know and we can work on that too. And feel free to disagree with anything I say too, you won’t offend me.

As I mentioned in the original post, this isn’t about me answering SQL Server questions for you. That’s what forums, Twitter, etc are for. Although I can help you figure out how to grow skills, this isn’t a mostly technical conversation.

This may seem like a lot of rules, but I’m very easy going. At the same time I like to have expectations set and ground rules agreed to so we all know where we stand – this is a big time investment from all of us, so it pays to get everything out in the open right at the start.

If that all sounds cool to you, let me know and I’ll send the second part of the kick off.

Thanks!

Feel free to use any or all of this if you’re going to be a mentor, or ask for something like this if you’re going to be a mentee.

The main thing to bear in mind is that both mentoring and being a mentee should be rewarding, and not feel like a chore or a slog. Setting some rules up front will help define the relationship and provide a clear way to end it if things aren’t going well.

Enjoy!

More live online training from Kimberly in March!

Due to the popularity of Kimberly’s delivery of her new IEVLT: Immersion Event on Very Large Tables: Optimizing Performance and Availability through Partitioning in January, and the fact that many of you told us that you didn’t have a chance to attend, we’re running it again in March! The class will be delivered live via WebEx on March 20-22 (roughly 12-13 hours of content plus open Q&As; about the same as two full workshop days!) and the recording is available to attendees for six months following the end of the class. Note: we’re not planning any more deliveries of IEVLT until 2019.

Here are some quotes from people who attended the January class:

  • “Kimberly is incredibly knowledgeable and was able to adapt the techniques to all the different scenarios presented to her.”
  • “The best educator I’ve ever seen. She makes complex concepts “magically” easy to grasp. Incred-amazing.”
  • “Great course. I could hear clearly, the content was relevant to current day problems, and provided clear instruction.”
  • “This was REALLY good. Getting to an IE is tough – there are only a few a year and, more importantly, because there are only a few, they cover a pretty broad range of information. Since I do mainly database design and query tuning, I can’t justify much beyond IE1 as I don’t do day-to-day DBA work. Seeing you were offering the online event focused on a specific topic – very large tables – was PERFECT. I know I really need to improve my knowledge of the options in that area. I recalled the PV/PT architecture from IE1 and knew a refresher, coupled with the new information, would be perfect. The cost was BEYOND reasonable. The time frame, at only about 1/2 a day, was easy to justify and easy to manage, keeping up with regular work while I did it. So this worked out to be a perfect event.” – Todd Everett, USA
  • “SLIGHTLY prefer the classroom option, but attendance to a location can be very hard to manage and therefore WebEx is a VERY, VERY good alternative. I didn’t think it would be, but I’m definitely sold.” – Kevin Urquhart, UK
  • “I’m really glad I decided to participate in this event. I learnt a lot and I already see lots of opportunities to implement some of the strategies at my workplace (after working through all the materials of course :P).” – Ernst Kruger, South Africa
  • “Loved the online aspect. It felt like I was there with the question ability and having the questions just answered right there. I felt I had a voice and could ask anything and the ability to watch it later made it totally worth the registration.”
  • “Always love to sit at the feet of Kimberly. Always learn a ton.”
  • “I struggled with WebEx at first… felt a little like watching a video… BUT the moment it became interactive (which was very early on), that whole perception changed. By the end I REALLY enjoyed it as a method.”
  • “I really enjoyed the ability to ask questions as the course went along so that I didn’t forget what I wanted to ask while you were teaching. This allowed for questions to come through and class to continue until a good stopping point to answer the questions. Plus having the questions written from other attendees was nice for future reference instead of trying to remember from an in-person class discussion.”
  • “Kimberly, I have to tell you that you are hands down the finest educator I have ever met. Your dynamic personality, combined with your depth of knowledge, and passion to fully understand the subject matter are unmatched, in my experience. I feel truly privileged to have had the opportunity to learn from you again and again. You really DO make learning fun. Thank you!” – Mike Petri, USA
  • “I learned a lot of new topics and found this class really useful. Usually when I attend a class or an in-person event I learn new things, but nowhere near as much as in this class. Thanks for a great class.”

Rather than have people try to watch a full day of training at their computer for one or more days, the class will run from 10am to 3pm PST each day, with two 90-minute teaching sessions, each followed by Q&A, and a lunch break. We chose to do this, and to spread the class over a few days, so the times work well for those in the Americas, Africa, and Europe. We also realize that this is complex, architectural content, so we want to give attendees time to digest each day’s material, plus extensive Q&A.

The modules covered will be:

  • Horizontal Partitioning Strategies
  • Partitioned Views
  • Partitioned Tables
  • Tables, Indexes, Keys, and Relationships
  • Implementing the Sliding Window Scenario
  • Key Partitioning Concerns and Considerations
  • Partitioning Techniques Combined
  • Review

The price of the class is US$895 and you can get all the details here.

The class was also announced in our newsletter this morning, with a $100 discount for those people who received that newsletter, valid for two weeks. All future live, online classes will always feature a two-week discount for newsletter subscribers.

We decided to start teaching some live, online classes as we recognize that not everyone can travel to our in-person classes, or take that time away from work or family, or simply have travel budget as well as training budget. People also have different ways they learn, some preferring in-person training, some preferring recorded, online training, and some preferring live, online training.

We’ll be doing more of these in 2018, on a variety of topics, so stay tuned for updates (and discounts through the newsletter).

We hope you can join us!