SQLskills – Meet Our Amazing Team

It’s funny… As I began this year, I realized how much I enjoyed scrolling through photos on Instagram. If you’re not on it, the beauty of it is that it’s just photos. No statuses, no politics, no links – just photos. I’m following some amazing photographers – both above and below water – as well as friends and family. And, I thought – our team is a blast; we work hard, we play hard. Why not set up an Instagram account for SQLskills and highlight some of the fun we have!

So, we did (SQLskills on Instagram) – we haven’t posted a lot yet but it will be really fun as we start to travel and present at various upcoming events:

What was also super exciting for me to realize is that this year marks 25 years of SQLskills! As a result, we’re also going to celebrate some of the many events where we’ve presented and met so many of you. It’s crazy to think that I’ve been running SQLskills for 25 years (and the last 13 with Paul) but I / we still really enjoy what we’re doing and we look forward to many more years! (and, I started when I was only 2 so I’m really only 27… Paul wishes. ;-))

What I thought would be fun is to tell you why we all enjoy what we do so much! And, there’s even a SentryOne webinar series coming up where we all share horror stories and the lessons we learned. The first one (Feb 12) is a recorded session but we’re doing an open-style Q&A on Feb 19 so you can join us and ask your questions too!

Wednesday, February 12, 2020 11:00 AM EST: Troubleshooting Horror Stories and the Surprising Lessons They Teach – Part 1 (register)

Wednesday, February 19, 2020 11:00 AM EST: Troubleshooting Horror Stories and the Surprising Lessons They Teach – Part 2 (register)


And, I thought it’d be fun for you to hear what keeps the rest of the team going and excited – both professionally and personally!

Meet Our AWESOME SQLskills Team


Kimberly L. Tripp
SQLskills founder – started in 1995

I’ve always thought that data is a company’s most precious asset. I had skeptics – when I first started this business – who told me there was no way I’d survive “only” focusing on data. I’d have to learn .Net, client-side coding, servers, etc. And, even earlier in my career, I did work with OS/2, Lan Manager, SQL Server, and Windows (plus, Windows for Workgroups – which I affectionately called Windows for Warehouses since that’s really where most of that software stayed 😉). I was more of a “jack of all trades” earlier on but I really enjoyed focusing more on SQL Server. Since roughly ’92, I’ve been working predominantly with SQL Server. And, in having a team, we’ve all been able to focus more. For me, my favorite topics and features all revolve around internals and “very large table” design. Specifically, I focus on partitioning and designing for performance / scalability. With an effective design, you won’t run into secondary problems with maintenance / availability and you’ll get A LOT more with the hardware that you have instead of constantly trying to kill your problem with iron. It’s really satisfying for me to work with folks who have had terrible problems that we’re able to solve with better design, effective indexing / statistics, and plan quality with proper caching.

And, if you’re at any of the following events, please be sure to stop by and say hi!

SQLBits in London: March 31 – April 4, 2020

Training Day, Wednesday, April 1, 2020: Statistics for Performance: Internals, Analysis, Problem Solving

As well as these SQLBits conference sessions:

SQLintersection in Orlando, FL: April 5-10, 2020

Pre-conference workshop, Monday, April 6, 2020: The Developer’s Guide to Consistency, Accuracy, Concurrency, and Transactional Control

Plus, a few sessions at the conference!

SQLskills Immersion Events In Chicago: April 27 – May 8, 2020

I’ll be teaching parts of IEPTO1 and IEPTO2 at our own SQLskills deep, technical training events (“Immersion” events) in Chicago:

As for my “fun” side, check out my “Meet the Team” photo posts that highlight some of my favorite non-SQL Server passions…


Paul S. Randal
SQLskills CEO – joined SQLskills in 2007

One of the things I like most is teaching people about the internals of SQL Server, from all the knowledge I picked up working on the SQL Server Development Team at Microsoft for nine years. Many people think learning internals is irrelevant, but understanding *why* something works the way it does and how that impacts design and operational choices is really important. In addition to knowing how/why something works, knowing where to start is equally important. Knowing where your server is “waiting” will give you direction. As a result, over the last few years, I’ve made wait statistics my primary focus. This endeavor has led me to create the hugely popular Wait Statistics and Latch Classes library. There’s nothing more rewarding than seeing a class student or consulting client have that ‘aha!’ moment when it all clicks!

SQLBits in London: March 31 – April 4, 2020

Training Day, Tuesday, March 31, 2020: Performance Troubleshooting using Waits and Latches

As well as SQLBits conference sessions!

SQLintersection in Orlando, FL: April 5-10, 2020

Pre-conference workshop, Monday, April 6, 2020: Performance Troubleshooting using Waits and Latches

As well as these sessions:

SQLskills Immersion Events In Chicago: April 27 – May 8, 2020

Of course, Paul will be joining the entire SQLskills team for Immersion Events in Chicago. Join Paul at one of these deep, technical courses:

As for Paul’s “fun” side, check out his “Meet the Team” photo posts that highlight some of his favorite non-SQL Server passions:


Jonathan Kehayias
SQLskills Principal Consultant – joined SQLskills in 2011

My primary focus is on HA/DR solutions for SQL Server as well as performance tuning and optimization. But, because I also love tackling complex multi-server problems and difficult automation problems, I do a lot with Service Broker and Replication; I get the really fun engagements! I enjoy finding alternate solutions to problems using the wide arsenal of tools available in my toolkit, and often jokingly refer to myself as a multi-faceted threat since I have both a developer and admin background with SQL Server. I am not above taking a problem entirely outside of SQL Server and offering a non-SQL related solution that meets a client’s requirements or performs faster. Dealing with insanely obscure problems and finding solutions is one of the things that brings me the most satisfaction and, at times, the most frustration about working with technology.

SQLintersection in Orlando, FL: April 5-10, 2020

Pre-conference workshop, Sunday, April 5, 2020: A Lap Around SQL Server for Linux Folks

Plus, a few sessions at the conference!

SQLskills Immersion Events In Chicago: April 27 – May 8, 2020

And, not surprisingly, Jon will be joining the entire SQLskills team for Immersion Events in Chicago. Join Jon at one of these deep, technical courses:

As for Jon’s “fun” side, check out his “Meet the Team” photo posts that highlight some of his favorite non-SQL Server passions:


Erin Stellato
SQLskills Principal Consultant – joined SQLskills in 2012

I like to know how things work. Within SQL Server, this means I find the internals fascinating, and I like to dig into features that I can get my hands on to try stuff to see what works and what breaks. When I worked for a software vendor, I dealt mainly with troubleshooting performance issues and corruption. The performance issues helped develop my foundation of knowledge around internals, statistics, indexes, and execution plans, and all the corruption drove home the importance of good HA and DR strategies. These are things I still love to this day, along with features like Query Store and Extended Events. The more troubleshooting and tuning I’ve done, the more I’ve learned about wait statistics, plan caching, and parameter sensitivity…and the more I try new features such as Columnstore and In-Memory OLTP. I am an engine girl at heart. Give me a good performance issue any day and I’m happy. I like solving problems, and I really enjoy teaching as I go. Ultimately, my favorite customer engagement is when I’ve fixed their issue and they have learned something new about SQL Server.

SQLBits in London: March 31 – April 4, 2020

Training Day, Wednesday, April 1, 2020: Performance Tuning with Query Store in SQL Server and Azure

As well as these sessions:

SQLintersection in Orlando, FL: April 5-10, 2020

Join Erin for sessions on Query Store, troubleshooting workloads, and analyzing execution plans!

SQLskills Immersion Events In Chicago: April 27 – May 8, 2020

Of course, Erin will be joining the entire SQLskills team for Immersion Events in Chicago. Join Erin at one of these deep, technical courses:

As for Erin’s “fun” side, check out her “Meet the Team” photo posts that highlight some of her favorite non-SQL Server passions:


Tim Radney
SQLskills Principal Consultant – joined SQLskills in 2015

During the housing crisis of the mid-2000s, I was working for a financial institution as a jack of all trades. My title was Sr. Application Engineer, but my job role was supporting dozens of vendor applications. Many of those had a SQL Server backend so I regularly worked around SQL Server. In 2008, when things were looking really bad for small and mid-size US banks, I was asked to join the DBA team. I loved working with SQL Server and data in general. I looked at job sites for application support type jobs and things were bleak; I then searched DBA jobs and there were thousands. That was enough for me to make a career change. A few months into the job I learned of the SQL Community and my career took off. Working with SQL Server has fundamentally changed my career, and life. Immediately, I started focusing on backups, baselines, standard configurations for all SQL Server builds, ETL, and performance. I moved up from a level-2 DBA to Sr. DBA, then lead DBA, to managing the production DBA team. During that time at the bank, I led the effort to virtualize 95+ percent of our SQL Server infrastructure and upgrade to modern versions of SQL Server. In late 2014, I was picked up by Paul in a bar in Seattle (true story) and joined the SQLskills team in January 2015. I continue to focus on data and have taken a serious interest in Azure and Power BI. Working with customers to determine where to best store different types of data and how to best utilize/analyze it is really satisfying!

SQLintersection in Orlando, FL: April 5-10, 2020

Pre-conference workshop, Monday, April 6, 2020: Management, Admin, and Best Practices for the Hybrid DBA (SQL Server 2016-2019 | Azure SQL DB | Managed Instance), co-presented with David Pless of Microsoft

Post-conference workshop, Friday, April 10, 2020: Performance Tuning and Optimization for Modern Workloads (SQL Server 2016-2019 | Azure SQL DB | Managed Instance), co-presented with David Pless of Microsoft

And, of course, a few sessions at the conference!

SQLskills Immersion Events In Chicago: April 27 – May 8, 2020

After that, Tim will absolutely be joining the entire SQLskills team for Immersion Events in Chicago as he teaches a couple of courses on his own! Join Tim at one of these deep, technical courses:

As for Tim’s “fun” side, check out his “Meet the Team” photo posts that highlight some of his favorite non-SQL Server passions:


THANKS!

So, be sure to click on a few of those links. No, I don’t get paid for clicks but wow – it takes a long time to add all those links to a post. I just hope someone uses one or two of them! ;-) And, make sure you stop by and say hi to anyone and ALL of our team; we all enjoy talking SQL Server, or diving (well, except for Erin but she loves talking about running / spinning / baking…). We all have our hobbies!

Finally, a HUGE WOW – I didn’t quite realize exactly how busy we all are over the next couple of months! This doesn’t include user group sessions or SQLSaturdays either – I know Erin just attended/presented at a really successful SQLSaturday in Cleveland and I presented to the Ireland SQL Server User Group just last night! If you’re interested in having one of us present for your user group – check out this post: Calling all user group leaders! We want to present for you in 2020!.

We really do hope to see you SOMEWHERE soon!!

Thanks for reading!
-k


SQLskills SQL101: Why are Statistics so Important?

In my years working with SQL Server, I’ve found there are a few topics that are often ignored. Ignored because people fear them; they think they’re harder than they actually are OR they think they’re not important. Sometimes they even think – I don’t need to know that because SQL Server “does it for me.” I’ve heard this about indexes. I’ve heard this about statistics.

So, let’s talk about why statistics are so important and why even a minimal amount of knowledge about them can make a HUGE difference to your query’s performance.

There are SOME aspects to statistics that ARE handled automatically:

  • SQL Server automatically creates statistics for indexes
  • SQL Server automatically creates statistics for columns – when SQL Server NEEDS more information to optimize your query
    • IMPORTANT: This only occurs when the database option auto_create_statistics is ON; this option is on by default in ALL versions/editions of SQL Server. However, there are some products/guidance out there that recommend you turn this off. I completely DISAGREE with this.
  • SQL Server automatically updates the statistical information kept for indexes/columns for a statistic that’s been “invalidated” – when SQL Server NEEDS it to optimize your query
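
If you want to verify these settings on your own instance, a quick look at sys.databases shows both options (a minimal sketch – run it from any database):

USE [master];
GO

-- Which databases have auto create / auto update statistics enabled?
SELECT [name] AS [database_name],
       [is_auto_create_stats_on],
       [is_auto_update_stats_on]
FROM [sys].[databases]
ORDER BY [name];
GO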

However, it’s important to know that while some aspects of statistics ARE handled automatically, they’re not perfect. And, SQL Server often needs some help – with certain data patterns and data sets – to really optimize effectively. To understand this, I want to talk a bit about accessing data and the process of optimization.

Accessing Data

When you submit a request to SQL Server to access data, you usually write a Transact-SQL statement – in the form of an actual SELECT – or you possibly execute a stored procedure (and, yes, there are other options). However, the main point is that you’re requesting a data set – not HOW that data set should be found/processed. So, how does SQL Server “get” to the data?

Processing Data

Statistics help SQL Server with the gathering / processing of the data. They help by giving SQL Server insight into how much data will be accessed. If a small amount of data is going to be accessed then the processing might be easier (and require a different process) than if the query were to process millions of rows.

Specifically, SQL Server uses a cost-based optimizer. There are other options for optimization but cost-based optimizers tend to be the most commonly used today. Why? Cost-based optimizers use specific information about the data – to come up with more effective, more optimal, and more focused plans – specific to the data. Generally, this process is ideal. With plans that are saved for subsequent executions (cached plans), it might not always be ideal – but most other options for optimization can suffer from even worse problems.

IMPORTANT: I’m not talking about cached plans here… I’m talking about the initial process of optimization and how SQL Server determines how much data will be accessed. The subsequent execution of a cached plan brings additional potential problems with it (known as parameter sniffing problems or parameter-sensitive plan problems). I talk a lot about that in other posts.

To explain cost-based optimization, let me cover other possible forms of optimization; this will help to explain a lot of the benefits of cost-based optimization.

Syntax-based Optimization

SQL Server could use just your syntax to process your query. Instead of taking time to determine the best possible processing order for your tables, an optimizer could just join your tables in the order they’re listed in your FROM clause. While it might be fast to START this process, the actual gathering of the data might not be ideal. In general, joining larger tables to smaller tables is a lot less optimal than joining smaller tables to larger. To give you a quick view of this, check out these two statements:

USE [WideWorldImporters];
GO

SET STATISTICS IO ON;
GO

SELECT [so].*, [li].*
FROM [sales].[Orders] AS [so]
JOIN [sales].[OrderLines] AS [li]
ON [so].[OrderID] = [li].[OrderID]
WHERE [so].[CustomerID] = 832 AND [so].[SalespersonPersonID] = 2
OPTION (FORCE ORDER);
GO

SELECT [so].*, [li].*
FROM [sales].[OrderLines] AS [li]
JOIN [sales].[Orders] AS [so]
ON [so].[OrderID] = [li].[OrderID]
WHERE [so].[CustomerID] = 832 AND [so].[SalespersonPersonID] = 2
OPTION (FORCE ORDER);
GO

Reviewing the costs of the two plans side-by-side, you can see that the second is more expensive.

Costs of the same query with FORCE ORDER and only the two tables in the FROM clause reversed between the 1st and 2nd execution

Yes, this is an oversimplification of join optimization techniques but the point is that it’s unlikely that the order you’ve listed your tables in the FROM clause is the most optimal. The good news though – if you’re having performance problems, you CAN force syntax-based optimization techniques such as joining in the order your tables are listed (as in the example). And, there are quite a few other possible “hints” that you can use:

  • QUERY hints (forcing level of parallelism [MAXDOP], forcing SQL Server to optimize for the FIRST row – not the overall set with FAST n, etc…)
  • TABLE hints (forcing an index to be used [INDEX], forcing seeks, forcing an index join [index intersection])
  • JOIN hints (LOOP / MERGE / HASH)
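
To make those concrete, here’s a hedged sketch of the syntax for each hint type, re-using the WideWorldImporters tables from above (for illustration only – NOT a recommendation; the TABLE hint assumes a suitable index on CustomerID exists):

USE [WideWorldImporters];
GO

-- QUERY hints: cap parallelism and optimize for the first 10 rows
SELECT [so].[OrderID], [so].[CustomerID]
FROM [sales].[Orders] AS [so]
WHERE [so].[CustomerID] = 832
OPTION (MAXDOP 4, FAST 10);
GO

-- TABLE hint: force a seek (fails if no suitable index exists)
SELECT [so].[OrderID], [so].[CustomerID]
FROM [sales].[Orders] AS [so] WITH (FORCESEEK)
WHERE [so].[CustomerID] = 832;
GO

-- JOIN hint: force a loop join between the two tables
SELECT [so].[OrderID], [li].[OrderLineID]
FROM [sales].[Orders] AS [so]
INNER LOOP JOIN [sales].[OrderLines] AS [li]
    ON [so].[OrderID] = [li].[OrderID]
WHERE [so].[CustomerID] = 832;
GO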

Having said that, these should ALWAYS be a last resort IMO. I would try quite a few other things before I’d FORCE anything in production code. By forcing one (or, more) of these hints, you don’t allow SQL Server to further optimize your queries as you add/update indexes, as you add/update statistics, or as you update/upgrade SQL Server. Unless, of course, you’ve documented these hints and you test to make sure they’re still the best option for your query after maintenance routines, routinely after data modifications, and after updates/upgrades to SQL Server. To be honest, I don’t see this done as often as it should be. Most hints are added as if they’re perfect and left until later problems come up – even when they’re possibly floundering inefficiently for days, weeks, months, etc…

Understand statistics and stop letting your queries flounder!
(sorry, I had to… it was my photo post for Sunday – so perfect!)

It’s always important to see if there’s something ELSE that you can do to get better performance BEFORE FORCING a query to do ONE THING. Yes, forcing might be easy NOW but a small amount of time checking some of these other things might be well worth it in the long run.

For plan problems that are specifically from THIS query and THESE values (not from the execution of a cached plan), some things I might try are:

  • Are the statistics accurate / up-to-date? Does updating the statistics fix the issue?
  • Are the statistics based on a sampling? Does FULLSCAN fix the issue?
  • Can you re-write the query and get a better plan?
    • Are there any search arguments that aren’t well-formed? Columns should always be isolated to one side of an expression…
      • WRITE: MonthlySalary > expression / 12
      • NOT: MonthlySalary * 12 > expression
    • Are there any transitivity issues you can help SQL Server with? This is an interesting one and becomes more problematic across more and more tables. However, adding a seemingly redundant condition can help the optimizer to do something it hadn’t considered with the original query (see the added condition below):

FROM table1 AS t1
JOIN table2 AS t2
    ON t1.colX = t2.colX
WHERE t1.colX = 12 AND t2.colX = 12

  • Sometimes just changing from a join to a subquery or a subquery to a join – fixes the issue (no, one isn’t specifically/always better than the other but sometimes the re-write can help the optimizer to see something it didn’t see with the original version of the query)
  • Sometimes using derived tables (a join of specific tables in the FROM clause AS J1) can help the optimizer to more optimally join tables
  • Are there any OR conditions? Can you re-write to UNION or UNION ALL to get the same result (this is the most important!) AND get a better plan? See the sketch after this list. Be careful though – these are semantically NOT the same query. You’ll need to understand the differences between all of these before you move to a different query.
    • OR removes duplicate rows (based on the row ID)
    • UNION removes duplicates based on JUST the SELECT LIST
    • UNION ALL concatenates sets (which can be A LOT faster than having to remove duplicates) but, this might / might not be an issue:
      • Sometimes there are NO duplicates (KNOW your data)
      • Or, sometimes it might be OK to return duplicates (KNOW your user/audience/application)
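
Here’s a minimal sketch of that OR-to-UNION rewrite pattern, again using WideWorldImporters (the predicate values are arbitrary examples):

USE [WideWorldImporters];
GO

-- Original: OR between two predicates
SELECT [so].[OrderID], [so].[CustomerID], [so].[SalespersonPersonID]
FROM [sales].[Orders] AS [so]
WHERE [so].[CustomerID] = 832
    OR [so].[SalespersonPersonID] = 2;
GO

-- Rewrite: UNION removes duplicates based on JUST the SELECT list,
-- so a row that matches BOTH predicates is still returned only once.
-- Swap UNION for UNION ALL to simply concatenate the sets (faster, but
-- a row matching both predicates would then be returned twice).
SELECT [so].[OrderID], [so].[CustomerID], [so].[SalespersonPersonID]
FROM [sales].[Orders] AS [so]
WHERE [so].[CustomerID] = 832
UNION
SELECT [so].[OrderID], [so].[CustomerID], [so].[SalespersonPersonID]
FROM [sales].[Orders] AS [so]
WHERE [so].[SalespersonPersonID] = 2;
GO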

This is a short list but might help you get better performance WITHOUT using hints/forcing. That means, as subsequent changes occur (to the data, to the indexes/statistics, to the SQL Server version) you’ll also be able to benefit from those changes!

But, the coolest part, is that these hints are there – IF we do have to use them! And, there ARE cases where the optimizer might not be able to come up with the most efficient plan; for those cases, you can force the process. So, yes, I love that they’re there. I just don’t want you to use them when you really haven’t determined what the actual root cause of the problem is…

So, yes, you can achieve syntax-based optimization…IFF you NEED it.

Rules-based Optimization (Heuristics)

I mentioned that cost-based optimization requires statistics. But what if you don’t have statistics?

Or, maybe a different way to look at it – why can’t SQL Server just use a “bunch of rules” to more-quickly optimize your queries without having to look at / analyze specific information about your data? Wouldn’t that be faster? SQL Server can do this – but, it’s often FAR from ideal. The best way to show you this is to NOT allow SQL Server to use statistics to process a query. I can show you a case where it actually works well and a MUCH MORE likely case where it doesn’t.

Heuristics are rules. Simple, STATIC / FIXED calculations. The fact that they’re simple is their benefit. No data to look at. Just a quick estimate based on the predicate. For example, less than and greater than have an internal rule of 30%. Simply put, when you run a query with a less than or greater than predicate and NO INFORMATION is known about the data in that column / predicate, SQL Server is going to use a RULE that 30% of the data will match. It uses this in its estimates / calculations and comes up with a plan tied to this rule.

To get this to “work,” I have to first disable auto_create_statistics and check to see if there’s anything there (already) that could possibly help my query:

USE [WideWorldImporters];
GO

ALTER DATABASE [WideWorldImporters]
SET AUTO_CREATE_STATISTICS OFF;
GO

EXEC sp_helpindex '[sales].[Customers]';
EXEC sp_helpstats '[sales].[Customers]', 'all';
GO

Check the output from sp_helpindex and sp_helpstats. By default, WideWorldImporters does not have any indexes or statistics that lead with DeliveryPostalCode (or even have it as a member). If you’ve added one (or, SQL Server has auto-created one), you must drop it before running these next code samples.

For this first query, we’ll supply a zip code of 90248. The query is using a less than predicate. Without any statistics and without the ability to auto-create any, what does SQL Server estimate for the number of rows?
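
The first query looks like this (it’s identical to the second query shown further below, except for the predicate value):

SELECT [c1].[CustomerID],
      [c1].[CustomerName],
      [c1].[PostalCityID],
      [c1].[DeliveryPostalCode]
 FROM [sales].[Customers] AS [c1]
 WHERE [c1].[DeliveryPostalCode] < '90248';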

Columns without statistics use heuristics…
Without engineering a “PERFECT” value, they’re going to be WRONG… most of the time!

For this first query, the estimate (30% of 663 = 198.9) works well, as the actual number of rows for the query is 197. One additional item to notice: there’s a warning symbol next to the table (Customers) as well as the left-most SELECT operator. These also tell you that something’s wrong. But, here – the estimate is “correct.”

For this second query, we’ll supply a zip code of 90003. The query is the exact same except for the value of the zipcode. Without any statistics and without the ability to auto-create any, what does SQL Server estimate for the number of rows?

SELECT [c1].[CustomerID],
      [c1].[CustomerName],
      [c1].[PostalCityID],
      [c1].[DeliveryPostalCode]
 FROM [sales].[Customers] AS [c1]
 WHERE [c1].[DeliveryPostalCode] < '90003';

Columns without statistics use heuristics (simple rules). Often, they are very incorrect!

For the second query, the estimate is again 198.9 but the actual is only 1. Why? Because without statistics, the heuristic for less than (and greater than) is 30%. Thirty percent of 663 is 198.9. This will change as the data changes but the percent for this predicate is fixed at 30%.

And, if this query were more complicated – with joins and/or additional predicates – having incorrect information is ALREADY a problem for the subsequent steps of optimization. Yes, you might occasionally get lucky and use a value that loosely matches the heuristic but MORE LIKELY, you won’t. And, while the heuristic for less than and greater than is 30%, it’s different for BETWEEN and different again for equality. In fact, some even change based on the cardinality estimation model you’re using (for example, equality (=) has changed). Do I even care? Actually, not really! I don’t really ever WANT to use them.

So, yes, SQL Server can use rules-based optimization…but, only when it doesn’t have better information.

I don’t want heuristics; I want statistics!

Statistics are one of the FEW places in SQL Server where having MORE of them can be beneficial. No, I’m not saying to create one for every column of your table but there are some cases where I might pre-create statistics. OK, that’s a post for another day!

So, WHY are statistics so important for cost-based optimization?

Cost-based Optimization

What does cost-based optimization actually do? Simply put, SQL Server QUICKLY gets a rough estimate of how much data will be processed. Using this information, it estimates the costs of the different possible algorithms that could be used to access the data. Then, based on the “costs” of these algorithms, SQL Server chooses the one that it’s calculated to be the least expensive. Finally, it compiles and executes that plan.

This sounds great; but there are a lot of factors as to why this may or may not work well.

Most importantly, the base information used to perform the estimations (statistics) can be flawed.

  • They could be out of date
  • They could be less accurate due to sampling
  • They could be less accurate because of the table size and the limitations of the statistics “blob” that SQL Server creates

Some people ask me – couldn’t SQL Server keep more detailed statistics (larger histograms, etc.)? Yes, it could. But then the process of reading/accessing these larger and larger statistics would start to get more and more expensive (and take more time, more cache, etc.), which would – in turn – make the process of optimization more and more expensive. It’s a difficult problem; there are pros and cons / trade-offs everywhere. Really, it’s not quite as simple as just “larger histograms.”
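
By the way, you can look at that statistics “blob” directly – the header, the density vector, and the histogram (which is capped at 201 steps). A minimal sketch (use a statistic or index name that sp_helpindex / sp_helpstats returned for your table; the name below follows the WideWorldImporters naming convention):

USE [WideWorldImporters];
GO

-- View the header, density vector, and histogram for one statistic
-- (replace the second argument with a statistic that exists on your table)
DBCC SHOW_STATISTICS ('[sales].[Customers]', 'PK_Sales_Customers');
GO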

Finally, the process of optimization itself can’t possibly consider EVERY combination for execution / processing, can it? No. That would make the sheer process of optimization so expensive that it would be prohibitive!

In Summary:

The best way to think about the process of optimization is that SQL Server MUST find a “good plan fast.” Otherwise, the process of optimization would become so complicated and take so long, it would defeat its own purpose!

So, why are statistics soooooo important:

  • They feed into the whole process of cost-based optimization (and, you WANT cost-based optimization)
  • They need to exist for optimization otherwise you’ll be forced to use heuristics (yuck)
    • IMO – I HIGHLY RECOMMEND turning auto_create_statistics back ON if you’ve turned it off
  • They need to be reasonably accurate to give better estimates (keeping them up to date is important but some statistics might need more “help” than others staying up to date)
  • They might need even further help (with HOW they’re updated, WHEN they’re updated, or even with ADDITIONAL statistics)
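
For that “further help,” here’s a hedged sketch of the statements involved, re-using the earlier example table (whether FULLSCAN is warranted depends entirely on your data; the CREATE STATISTICS name is hypothetical):

USE [WideWorldImporters];
GO

-- Update all statistics on a table, reading every row
UPDATE STATISTICS [sales].[Customers] WITH FULLSCAN;
GO

-- Pre-create a column-level statistic (hypothetical name) instead of
-- waiting for auto-create to kick in during optimization
CREATE STATISTICS [stat_Customers_DeliveryPostalCode]
ON [sales].[Customers] ([DeliveryPostalCode])
WITH FULLSCAN;
GO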

STATISTICS ARE THE KEY TO BETTER OPTIMIZATION AND THEREFORE BETTER PERFORMANCE!
(but, they’re far from perfect!)

I hope that motivates you to consider learning a bit more about statistics. They’re actually easier than you might think AND (obviously) they’re really, really important!

Where can you get more information?

Whitepaper: There’s a VERY dated whitepaper – but one that still has some great concepts – on statistics here: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008

Online Training with Pluralsight: I don’t [yet] have a course just on statistics but – out of necessity – I talk about them in some of my other courses. This course: SQL Server: Optimizing Ad Hoc Statement Performance has the best discussion. Specifically, be sure to review Module 3: Estimates and Selectivity.

TIP: If you’re not a Pluralsight subscriber, shoot an email to Paul at SQLskills dot com and tell him I sent you there for a PS code. That will give you 30 days to check out our SQLskills content on Pluralsight!

In-Person Events: And, we have some in-person events coming up where you can certainly learn a lot more about SQL Server, and specifically statistics:

SQLBits in London – March 31-April 4, 2020

My FULL-DAY workshop: Statistics for Performance: Internals, Analysis, Problem Solving

SQLintersection in Orlando, FL – April 5-10, 2020

My 60 minute conference session: Statistics: Internals, Analysis, and Solutions

SQLskills Immersion Events

IEPTO1 has a few modules (2+ DAYS) related to indexing and statistics!

I HIGHLY recommend that you #NeverStopLearning and that you access as many different mediums / learning paths as possible. This stuff isn’t simple and it takes time to really digest it and LEARN it.

Thanks for reading!
-k

SQLintersection Fall 2017 – 4 weeks to go!

As we head towards our 10th SQLintersection in four weeks, we’re excited to say that it’s once again our most diverse, complete, and information-packed show yet!

One of the pieces of feedback we hear over and over is that attendees love SQLintersection because it’s a smaller, laid-back show, where you get to actually spend time talking with the presenters 1-1. That’s one of the reasons why we love the show so much; *we* get to spend time talking to attendees, rather than being mobbed by hundreds of people after a session ends. And we only pick presenters who we know personally, and who we know to be humble, approachable, and eager to help someone out.

We have 2 pre-con days at the show, and with our post-con day, there are 9 full-day workshops from which to choose. We have 40 technology-focused (NOT marketing) sessions from which to choose, plus two SQL Server keynotes, multiple industry-wide keynotes by Microsoft executives, and the ever-lively closing Q&A that we record as a RunAs Radio podcast.

You’ll learn proven problem-solving techniques and technologies you can implement immediately. Our focus is around performance monitoring, troubleshooting, designing for scale and performance, cloud, as well as new features in SQL Server 2014, 2016, and 2017. It’s time to determine your 2008 / 2008 R2 migration strategy – should you upgrade to 2016/2017 directly? This is the place to figure that out!

If you’re interested in how we got here – check out some of my past posts:

  1. SQLintersection: a new year, a new conference
  2. SQLintersection’s Fall Conference – It’s all about ROI!
  3. Fall SQLintersection is coming up soon and we can’t wait!
  4. SQLintersection Conference and SQLafterDark Evening Event – what a fantastic week in Vegas

And I recorded a Microsoft Channel 9 video where I discuss the Spring show – see here.

SQLafterDark

With minimal to no marketing filler, we’ve largely kept our conference focus on ROI and technical content (performance / troubleshooting / tales-from-the-trenches with best practices on how to fix them) but we’ve also added even more social events so that you really get time to intersect with the conference attendees and speakers. The SQL-specific, pub-quiz-style evening event SQLafterDark was wildly popular at some of our past shows and it’s returning for Fall!


SQLintersection: Great Speakers!

Once again, we think a great show starts with great speakers and current / useful content. All of these speakers are industry experts who have worked in data / SQL for years (some can even boast decades) but all are still focused on consulting and working in the trenches. And, they’re good presenters! Not only will you hear useful content but you’ll do so in a way that’s digestible and applicable. Every speaker is either an MCM (Master), a SQL Server MVP, or a past/present Microsoft employee (or a combination of all three!). But, regardless of their official credentials – ALL are focused on providing the most ROI that’s possible in their session(s) and/or their workshops, and ALL have spoken at SQLintersection multiple times.

Check out this phenomenal list of speakers:

  • Aaron Bertrand – MVP, SentryOne
  • David Pless – MCM, Microsoft
  • Jes Borland – past-MVP, Microsoft
  • Jonathan Kehayias – MCM, MCM Instructor, MVP
  • Justin Randall – MVP, SentryOne
  • Kimberly L. Tripp – MCM Instructor, MVP, past Microsoft, SQLskills
  • Paul S. Randal – MCM Instructor, MVP, past Microsoft, SQLskills
  • Shep Sheppard – past Microsoft, Consultant
  • Stacia Varga – MVP, Consultant
  • Tim Chapman – MCM, Microsoft
  • Tim Radney – MVP, SQLskills

You can read everyone’s full bio on our speaker page here.

SQLintersection: When is it all happening?

The conference officially runs from Tuesday, October 31 through Thursday, November 2 with pre-conference and post-conference workshops that extend the show over a total of up to 6 full days. For the full conference, you’ll want to be there from Sunday, October 29 through Friday, November 3.

  • Sunday, October 29 – pre-con day. There are three workshops running:
    • Data Due Diligence – Developing a Strategy for BI, Analytics, and Beyond with Stacia Varga
    • Performance Troubleshooting Using Waits and Latches with Paul S. Randal
    • SQL Server 2014 and 2016 New Features and Capabilities with David Pless and Tim Chapman
  • Monday, October 30 – pre-con day. There are three workshops running:
    • Building a Modern Database Architecture with Azure with Jes Borland
    • Data Science: Introduction to Statistical Learning and Graphics with R and SQL Server with Shep Sheppard
    • Extended Events: WTF OR FTW! with Jonathan Kehayias
  • Tuesday, October 31 through Thursday, November 2 is the main conference. Conference sessions will run all day in multiple tracks:
    • Check out our sessions online here
    • Be sure to check out our cross-conference events and sessions
    • Get your pop-culture trivia and techie-SQL-trivia hat on and join us for SQLafterDark on Wednesday evening, November 1
  • Friday, November 3 is our final day with three post-conference workshops running:
    • Common SQL Server Mistakes and How to Correct Them with Tim Radney
    • SQL Server 2016 / 2017 and Power BI Reporting Solutions with David Pless
    • Very Large Tables: Optimizing Performance and Availability through Partitioning with Kimberly L. Tripp

SQLintersection: Why is it for you?

If you want practical information delivered by speakers who not only know the technologies but are also competent and consistently highly-rated presenters – this is the show for you. You will understand the RIGHT features to troubleshoot and solve your performance and availability problems now!

Check us out: www.SQLintersection.com.

We hope to see you there!

PS – Use the discount code ‘SQLskills’ when you register and receive $50 off registration!