Statistics Starters Presentation: Scripts, the Database, and Some Answers

This past Tuesday I presented “Statistics Starters” for the PASS DBA Fundamentals Virtual Chapter.  You can read the abstract here, and as you may have guessed from the title, it was a 200-level session on statistics, appropriate for anyone who knew of statistics in SQL Server but wasn’t exactly sure how they were created, how they were updated, how to view them, etc.  Over 300 people attended (thank you!) and I had some great questions.  I plan to answer the questions in a series of posts, starting with this one.

Question: Can we get a copy of the scripts in your demo?  And where can we get a copy of the database you used?

Answer: The scripts, slide deck, and database can be downloaded from the SQLskills demos and databases resource page.  The database I used for these demos, which I plan to continue to use for presentations, is the Lahman baseball database.  While the AdventureWorks database is well known and widely used, I admit that I have a hard time thinking of good Sales and Product examples for my demos.  I know baseball a lot better than I know sales. :)

Question: Are we able to rollback newly created statistics if the plans created after an update are bad?

Answer: (edited 2014-01-23 2:45 pm) Great question.  The answer is…kind of.  This is one feature that exists in Oracle that I would be interested in seeing in SQL Server.  Oracle provides the ability to save and restore statistics, and you can even export statistics from one database and import them into another.  Pretty cool…potentially dangerous, but still cool.  It turns out it is possible to restore statistics in SQL Server, if you save out the stats stream first and then update the statistic with that stream.  Thanks to my colleague Bob Beauchemin (b) for pointing out how it can be done (I learn something new every day).  Johan Bijnens also messaged me to point out that you can script out statistics – which I always forget.  The next step is to update statistics with the stats_stream that you script out.  Take note: it is a hack.  Thomas Kejser blogged the steps here, and he has a fantastic disclaimer at the beginning because the method described is unsupported.  Before I write any more about the “feature”, I’m going to do a little testing and hacking of my own.  More to come!
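
For reference, here is the general shape of the hack – a sketch only, since the STATS_STREAM options are undocumented and unsupported, and the statistic name below is hypothetical (the table is from the Lahman database):

-- Capture the stats stream for a statistic (undocumented WITH STATS_STREAM option):
DBCC SHOW_STATISTICS ('dbo.Batting', 'IX_Batting_PlayerID') WITH STATS_STREAM;

-- Later, "restore" the statistic by applying the saved stream
-- (the 0x value below is a placeholder for the full stream you captured):
UPDATE STATISTICS dbo.Batting IX_Batting_PlayerID
WITH STATS_STREAM = 0x010000000000000000;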

Question: Why should I use the UPDATE STATISTICS command…isn’t sp_updatestats always the best option?

Answer: See my post Understanding What sp_updatestats Really Updates for an explanation of why I don’t recommend using sp_updatestats.
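
In short, UPDATE STATISTICS gives you control that sp_updatestats doesn’t: you decide which statistics get updated, and with what sample.  A couple of hedged examples (the index name is hypothetical; the table is from the Lahman database):

-- Update a single statistic with a full scan:
UPDATE STATISTICS dbo.Batting IX_Batting_PlayerID WITH FULLSCAN;

-- Update all statistics on a table with an explicit sample:
UPDATE STATISTICS dbo.Batting WITH SAMPLE 50 PERCENT;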

Question: Is it good to update statistics after rebuilding an index?

Answer: This is not recommended.  Remember that rebuilding an index updates the index’s statistics with the equivalent of a full scan – if you run a command to update statistics after a rebuild, you are wasting resources, and the statistics may be updated with a smaller sample.  That can be a problem because, depending on the sample, the statistics can provide less accurate information to the optimizer (not always, but it’s possible).
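
If you want to see this for yourself, check the statistics header before and after (index name again hypothetical):

ALTER INDEX IX_Batting_PlayerID ON dbo.Batting REBUILD;

-- After the rebuild, Rows and Rows Sampled in the header should match (full scan):
DBCC SHOW_STATISTICS ('dbo.Batting', 'IX_Batting_PlayerID') WITH STAT_HEADER;

-- An update with the default sample would then resample, possibly with far fewer rows:
UPDATE STATISTICS dbo.Batting IX_Batting_PlayerID;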

Question: Is it good practice to update statistics if I reorganize the index?

Answer: In general, yes, because reorganizing an index does not update statistics.  I recommend that you proactively manage your statistics and not rely solely on automatic updates (assuming you have the AUTO_UPDATE_STATISTICS option enabled for your database).  If you are only reorganizing your indexes, make sure that you have another step or job that updates statistics, as in the sketch below.  If you either rebuild or reorganize (or do nothing) based on the level of fragmentation, then you need to make sure you manage statistics accordingly (e.g., don’t update if a rebuild has occurred, do update if you’ve reorganized).
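
A sketch of what that maintenance step might look like (hypothetical names; adjust the sample to suit your data):

ALTER INDEX IX_Batting_PlayerID ON dbo.Batting REORGANIZE;

-- REORGANIZE does not touch statistics, so update them explicitly:
UPDATE STATISTICS dbo.Batting IX_Batting_PlayerID WITH FULLSCAN;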

I’ll answer a few more questions in my next post, thanks for reading!

Refer a Friend, Get a Gift Card!

Yesterday I made this video:

[Video: Cold, snow, and wind in CLE]

Notice at the end I mention Tampa and IE0.  IE0 is our Immersion Event for Accidental DBAs, and it’s a newer course that Jonathan and I teach.

Not an Accidental DBA?  Doesn’t matter, please keep reading. :)

If you refer a friend or colleague for IE0 or IEHW – that’s Glenn’s two-day hardware class – YOU get a $50 Amazon gift card!

See, I would bet that the Accidental DBA/Involuntary DBA/Junior DBA/person-who’s-managing-the-SQL-Server-instance-but-isn’t-quite-sure-what-they’re-doing does not read my blog.  They may not know about SQLskills, they might not even know who Paul and Kimberly are (it does happen…NO ONE in my extended family has ever heard of them, can you believe it?).

But you know that person needs training.  And we can help.

So reach out to a fellow member of your user group, a colleague at work, or someone you know who is new to SQL Server, and let them know about the Accidental DBA training that we provide.  You can send them this link to learn more about our training and Immersion Events.  If they sign up for IE0, they can learn how to keep their SQL Server up and running, and you can buy something you probably don’t need but really want (and you won’t have to share it, because we won’t tell anyone you got a gift card).

And…in case you’ve been eyeing an IE course…today (January 3) is the last day for discounted pricing for the Tampa classes!  Book now, or book for one of our May events in Chicago.

We hope to see you, and a friend, in 2014!

An Early Present

It’s Friday, December 20…the last day of the school year for my kids, and we wake up to no power, rain, and 40-degree temps. On the way to school my 6-year-old daughter threw up in the car. Huh…one of those days.

But you know what’s great about today? Voting for the first annual Tribal Awards opens. Not sure what those are? Check out one of these posts at SQLServerCentral or Simple-Talk. Why am I mentioning it? (And why am I asking so many questions this morning?!) Well, I was nominated for something…which is really, really cool. I consider it a huge compliment to have individuals who attended my Extended Events session at the PASS Summit nominate it for best presentation, and it’s even more humbling when you see the other speakers on the ballot. I did not see everyone else’s session (except Dr. DeWitt’s, which I live-blogged), but I know Rob Farley, Steve Jones, and Neil Hambly to be great speakers with deep technical knowledge. I find myself in good company, and I am honored. And I have Rob to thank for helping me make my presentation better. The day before my session at Summit, I spent about an hour with Rob talking through it, asking for his feedback and insight. Thank you, Rob!

Finally, I’d like to give a shout out to my SQLskills team members for their nominations. Glenn, Kimberly, and I each have one, and Paul and Jonathan each have three. I am so incredibly proud of our team and everything that we’ve accomplished this year.

Congratulations to all nominees in all categories, and if you want to vote, you can use this link. And in case I don’t post again before the end of the year, I hope you all have a wonderful holiday. Thanks for reading, and here’s to an amazing 2014!

Extended Events Usage Survey

Last week at the PASS Summit I presented my Making the Leap from Profiler to Extended Events session, and one of the questions I always ask at the beginning is how many people have used Profiler (usually most of the attendees) and how many have used Extended Events (very few).  I’m giving this session again this week and next, and I thought it would be interesting to get feedback from the community as a whole about Profiler and XE use based on SQL Server version.  So in Paul Randal style, here’s my first poll.


I will summarize the results in a post next week.  Thanks for your help!

PASS Summit 2013: Final Recap

I haven’t traditionally written recap posts for the PASS Summit, but this year was just phenomenal, and I think that down the road I would regret it if I didn’t take a few minutes to summarize the highlights.

[Photo: Perry and I arrive at the PASS Summit]

In no particular order…

#SQLRun

On Wednesday morning about 70-some people congregated in downtown Charlotte for the now-traditional #SQLRun.  Organized by Jes Borland (whom I really don’t have enough wonderful adjectives to describe), it was a 3.4-mile run with fellow SQL runners in the complete dark.  It was the perfect way to start Day 1 of the Summit.  A run, when it includes friends, is never bad.  I met a couple of new people that I saw throughout the week, proving again that common interests outside SQL Server help facilitate those relationships within PASS.  Whatever your passion, I encourage you to find people with the same non-SQL interests at conferences and SQLSaturdays.  You just never know who you’ll meet.

My Sessions

My first session was Wednesday morning after the keynote, and it was Making the Leap from Profiler to Extended Events.

[Photo: Perry checking out the crowd before my XE session]

This was one of my favorite topics to cover this year, and based on feedback throughout the week, it hit home with many people.  Over 300 attendees made it to the session (the picture above was taken 15 minutes before I started), and I had great questions throughout and finished right on time.  In case you missed it, I’ll be giving a shorter version of the same session this Wednesday at noon EDT for the DBA Virtual Chapter (sign up here) and again next week at SQLIntersection.  Scripts can be downloaded from the PASS site, or from the SQLskills site under the Conference Demos heading.

My second session was Friday morning at 8 AM, How to Avoid Living at Work: Lessons from Working from Home.  Despite the early hour, we had a good number of attendees and a great discussion.  As I mentioned in a post back in August, I’m still adjusting, but it’s going well. :)

The WIT Panel

I had the honor of sitting on the WIT Panel on Thursday, and even though I probably said less than the other panelists, I had the opportunity to address a couple great questions (including one from an audience member).

[Photo: 2013 WIT Panel (L to R: Gail Shaw, Kevin Kline, Cindy Gross, Rob Farley, me, and moderator Mickey Stuewe)]

You can view the recording here, and since Thursday I’ve had a lot of time to reflect on what else I could have said, particularly when I answered the question from the audience member.  I want to include it here, for reference, and if you watch the video it starts at 59:11:

I had an interesting experience.  I was walking around the Expo yesterday and after having a short conversation with someone, someone said to me, well, you are a woman working in technology, you are a foreigner, you are a former collegiate athlete, and you are young.  You have all this working against you, how are you going to make it in this industry?

My reply to her was that I would have said, “How am I NOT going to make it?”  Because here’s the thing: YOU decide what you can and cannot do, what you will and will not do.  You are in complete control of your destiny.  People will doubt you.  People will tell you that you aren’t good enough, don’t know enough, that you’re not “something enough”.  Don’t listen to them.  Know who you are…and if you don’t know, figure it out.  I firmly believe that once you fully accept the person that you are, and you like that person, nothing will stop you.  Have confidence in yourself and then go forth and conquer.  And to the guy that said that?  There’s one part of me that wants to kick his @$$.  The other part of me feels sorry for him.  He has no idea what he’s up against.

The SQLskills Team

A conference bonus is that I get to see the SQLskills team.  It’s not often we’re all together because we’re scattered throughout the US.  I had time with every member of the team, including a couple dinners which really provide time to catch up in a relaxed setting.  I also moderated Kimberly’s session, Skewed Data, Poor Cardinality Estimates, and Plans Gone Bad, on Thursday, which was a lot of fun.  If you have any interest in statistics, go watch her session on PASS TV.

[Photo: Me and Kimberly – notice how tall I am?! (photo credit @AmazingScotch)]

SQL Sentry

I cannot say enough good things about SQL Sentry.  They sponsored many events at the Summit including (and if I miss one please let me know):

  • Quizbowl at the Welcome Reception
  • Karaoke on Tuesday night
  • #SQLRun on Wednesday morning (they marked the path and provided t-shirts to those who registered early)
  • WIT Panel Luncheon (including a cool USB hub for swag)
  • The SQL Sentry shuttle on Tuesday, Wednesday, and Thursday nights that provided transportation for Summit attendees around Charlotte

In addition to being a community leader, SQL Sentry is simply a remarkable company.  I have met many members of their team, and it’s a close-knit group that values its customers and just puts out great products.  I have been a fan of the company and its team since I joined the community, and they raised the bar even further this year.  Well done.

Dr. DeWitt’s Keynote

On Thursday morning Dr. DeWitt returned to the PASS Summit…I actually have no idea how many times he’s given a talk at the PASS Summit, but I know that for each of the past four years that I have been there, he’s been there.  This year his topic was Hekaton and of course it did not disappoint.

[Photo: Perry listening to Dr. DeWitt talk about Hekaton while I type rapidly]

I live-blogged his session and was able to capture a fair bit of his content.  Dr. DeWitt explains complex database topics in a way that many understand – he’s not just a smart guy, he’s a great teacher.  Thank you Dr. DeWitt for your session, and thank you PASS for bringing him back again.  Can we do it again next year?

My Peeps

I cannot list everyone here.  You would all just end up looking for your name. :)  But seriously, there are so many factors that contribute to a successful Summit for me, and one of them is most certainly seeing friends and meeting new people.  Whether we had a 5-minute chat, discussed a technical problem and how to solve it, or enjoyed a beer at some point: thank you for being part of the SQL community, and for being part of my world.  I feel so fortunate that I have a group of individuals, within my professional field, whom I call true friends.

Ok, ok…I have to give a special shout out to Johan Bijnens, who brought me chocolate all the way from Belgium, and Aaron Bertrand, who brought me Kinder Eggs from Canada.  Thank you both for feeding my addiction. :)

I’m already thinking about next year’s Summit, but I hope to see you all before then.  Have a great week, and good luck catching up on email!

p.s. One of my favorite pictures from the week, courtesy of Jimmy May.  And if you’re wondering why the heck this Perry the Platypus stuffed animal shows up in all these pictures…well, I take him with me on trips and then take pictures to send back to my kids.  They think it’s hilarious.  Ok…I do too.

[Photo: Perry and me before my XE session]

PASS Summit 2013: Day 2

And day 2 at this year’s PASS Summit starts with a sweet surprise from Aaron Bertrand ( b | t ), Kinder eggs.  It promises to be a good day.

Today is Dr. DeWitt’s keynote (did I mention that he’s a University of Michigan alum? Go Blue!), and here we go…

8:15 AM

Douglas McDowell, outgoing Vice-President of Finance, starts with information about the PASS budget.  Summit is the largest source of revenue for PASS, the Business Analytics Conference provided a nice contribution to the budget this year (over $100,000), and PASS has one million dollars in reserve.

Last year PASS spent 7.6 million dollars on the SQL Server community, with the largest amount spent on the Summit.  The second largest cost was the BA Conference.  Per Douglas, Headquarters (HQ) is a critical investment for PASS.  Right now the IT department has 3 individuals maintaining 520 websites.  (And you thought you were a busy DBA!)  One initiative for PASS this year, and going forward, is international expansion, which took about 30% of the budget this past year.  Overall, PASS is in a very good financial place – and thanks to Douglas for all his work as a Board member.

8:31 AM

Bill Graziano takes the stage to thank Douglas for his time on the Board, and also Rob Farley who moves off the Board this year.  Bill asked Rushabh to come on stage…Rushabh has been on the Board of Directors for 8 years.  He’s held the positions of VP of Marketing, Executive VP of Finance, and President.

8:34 AM

Incoming PASS President Tom LaRock takes the stage, and starts with an homage to Justin Timberlake and Jimmy Fallon’s hashtag video.  Awesome.  Tom introduces the incoming PASS BoD:

  • Thomas LaRock (President)
  • Adam Jorgensen (Executive VP of Finance)
  • Denise McInerney (VP of Marketing)
  • Bill Graziano (Immediate Past President)
  • Jen Stirrup (EMEA seat)
  • Tim Ford (US seat)
  • Amy Lewis (open seat)

Tom has announced the PASS BA Conference – it will be May 7-9, 2014 in CA.  Next year’s Summit will be November 4-7, 2014 in Seattle, WA.

The WIT Lunch is today – and I’m on the panel so I hope to see you there!

8:41 AM

Dr. DeWitt takes the stage, and the topic is Hekaton: Why, What, and How.

I was able to meet the co-author of this session, Rimma Nehme, before today’s keynote – she’s a Senior Researcher in his lab (which is apparently in an old Kroger grocery store building on the Madison campus).

DeWitt says that Hekaton is an OLTP rocket ship.  The marketing team has renamed Hekaton to In-Memory OLTP, and DeWitt wants people to vote on Twitter.  I am Team #Hekaton…it just sounds cooler (and it’s much easier to type).

He’s covering three things: the What, the Why, and the How of Hekaton.

Hekaton is memory optimized, but durable.  It’s a very high performance OLTP engine, but can be used for more than that.  It’s fully integrated into SQL Server 2014, not a bolt-on.  Architected for modern CPUs.  (Slide deck will be posted later, I’ll post the link when I have it.)

Why Hekaton?  Many OLTP databases now fit in memory.  There are certain kinds of workloads that SQL Server can no longer meet.  Historically, OLTP performance has been improved by better software (driven by TPC benchmarks), CPU performance doubling every 2 years, and existing DBMS software maturing.  DeWitt says we’ve done as much as we can with mainline products.  CPUs are not getting faster – that well is dry.

The name Hekaton was picked because the goal was a 100X improvement.  They’re not quite there yet – customers have seen 15-20X.  To put the goal in perspective: if burning 1 million instructions per second only yields 100 TPS, then getting to 10,000 TPS would require reducing the instructions per transaction to a value that’s just not possible.

There is a flood of new products aimed at getting to 100X (e.g., Oracle’s TimesTen, IBM’s solidDB, VoltDB), including Hekaton.  Why a new engine?  Why not just pin all the tables in memory?  That won’t do the trick.  Performance would still be limited by the use of:

  • latches for shared data structures such as the buffer pool and lock table
  • locking as the concurrency control mechanism
  • interpretation of query plans

Consider the implications of a shared buffer pool.  Assume the pool is empty.  Query 1 comes along and needs page 7.  Is page 7 in the pool?  No, so a frame is allocated and the query has to wait while the IO occurs.  The IO completes and Query 1 can continue.  Remember that the buffer pool is a shared data structure: if Query 2 checks for page 7, the buffer manager will report where it is, but Query 2 will be blocked by the latch on page 7 until Query 1 is finished.

(sidebar: a transaction or query only holds 2 latches at a time)

There can be significant contention for latches on “hot” pages in the buffer pool.  This can be a big performance hit.  All “shared” data must be protected with latches.

The need for concurrency control…  Query 1: A = A + 100.  Database actions: read A, update the value, write A.  Query 2: A = A + 500.  Database actions: read A, update the value, write A.  If A was originally 1000, after both queries it will be 1600.  This represents a serial schedule.

Two-phase locking, developed by Dr. Jim Gray, is the standard.  Two simple rules:

  1. Before access, query must acquire “appropriate” lock type from Lock Manager
  2. Once a query releases a lock, no further locks can be acquired

If these rules are followed, the resulting schedule of actions is equivalent to some serial (good) schedule.  Dr. Gray received the Turing Award for this proof, one of two given to database scientists.

(sidebar: can I get a fellowship in Dr. DeWitt’s lab?  Seriously…)

A deadlock detection/resolution mechanism is still needed (they wanted to get rid of this for Hekaton…which is part of why it’s been a 5-year effort).

After a query is parsed and optimized, you get an execution plan, which is given to a query interpreter that walks the tree of operators and executes them in a particular order.  When the database is on disk, the cost of interpreting the tree is insignificant.

These three things (latches on shared data structures, locking for concurrency control, and query interpretation) are why you can’t get to 100X with the current implementation.

Currently in SQL Server, shared data structures use latches.  Concurrency control is done via locking, and query execution is via interpretation.

With Hekaton, shared data structures are lock-free.  For concurrency control, versions with timestamps plus optimistic concurrency control are used.  For query execution, queries are compiled into a DLL that is loaded when they are executed.  This is what will get us to 100X.

SQL Server now has 3 query engines – relational, Apollo (column store), and Hekaton.

To use Hekaton, create a memory-optimized table.  There are two kinds of durability: schema-only and schema-and-data.  (Every Hekaton table must have a primary key index – it can be hash or range.  There is also a new b-tree in Hekaton, the Bw-tree, which gives high performance on range queries.)  There are some schema limitations in V1.  Once you’ve created the table, populate it: run a SELECT INTO statement, or do a BULK LOAD from a file – just make sure it’s going to fit into memory.  Then, use the table.  Via the standard ad-hoc T-SQL query interface (termed “interop”), you get up to a 3X performance boost.  Adapt, recompile, and execute T-SQL stored procedures, and you get a 5X-30X improvement.
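
For the curious, here is roughly what table creation looks like – a minimal sketch of my own, not from the keynote, assuming a database that already has a MEMORY_OPTIMIZED_DATA filegroup (all names are hypothetical):

-- A durable (schema-and-data) memory-optimized table with a hash primary key index:
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);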

Query optimization is the hardest part of relational databases, per Dr. DeWitt.  Lock-free data structures truly are rocket science – they make query optimization look simple.

Lock-free data structures were invented by Maurice Herlihy at Brown University – the work got him elected to the National Academy of Engineering (which is a big deal).

When you think lock-free, think latch-free – it allows multiple processes with threads to access the same data structure without blocking.  Dr. DeWitt has a great slide showing the performance difference between latched and lock-free structures with multiple threads.  He mentioned that it was a tough slide to animate (and if you see it, you’ll understand why…I was actually wondering how he did it).  With lock-free (aka latch-free) structures, an update does not block reader threads – there is no performance hit.  Every shared data structure in Hekaton was built around this functionality.

In Hekaton, there is now a different concurrency control mechanism.  It’s optimistic:

  • Conflicts are assumed to be rare
  • Transactions run to “completion” without blocking or setting locks
  • Conflicts detected during a Validation phase

The second component of concurrency control is multiversioning – updating a row creates a NEW version of the row.  This works really well when you do it in memory.  The third component is timestamps – every row version has a timestamp:

  • Each row version has an associated time range
  • Transactions use their begin timestamp to select the correct version
  • Timestamps are also used to create a total order for transactions, to obtain the equivalent of a serial order

This approach drastically reduces the number of threads needed – dozens, not hundreds.

Transaction phases in Hekaton:

  • Read committed versions of rows
  • Updates create new tentative versions
  • Track read set, write set, and scan set

When the transaction is done, it goes through a second phase, validation; this is where the concurrency control mechanism decides whether the transaction can commit.  Then it reaches the commit point…

When a transaction begins, the current clock value is used as the Begin_TS for the transaction.  At the start of the validation phase, the transaction is given a unique End_TS, which is used during validation to determine whether it is safe to commit the transaction.  Begin_TS values are NOT unique; End_TS values are ALWAYS unique.

Hekaton tables have either a hash or range index on a unique key.  Rows are allocated space from SQL Server’s heap storage.  Additional indices (hash or range) can be created on other attributes.

Hekaton row format – all rows are tagged with a Begin_TS and an End_TS.  The latest version has infinity as its End_TS (it’s the most recent version of the row).  The Begin_TS is the End_TS of the inserting transaction.  The End_TS is the logical time when the row was deleted and/or replaced with a new version.

Multiversioning example – take a transaction that increases a value by 10,000.  A new version of the row is created.  Pointers are used to link the versions together in memory – don’t think of them as contiguous in memory.  The transaction puts its signature (transaction ID) on each row (the End_TS of the initial row, the Begin_TS of the new row).  When the transaction is later validated and committed, for all rows it updated/created/deleted, it re-accesses each row bearing that transaction ID and replaces the ID with the End_TS.  NO LATCHES ARE USED!  NO LOCKS ARE SET!  NO BLOCKING OF ANY TRANSACTIONS!  (I’m not yelling, and neither is Dr. DeWitt.)  This is timestamps and versioning – used on rows AND transactions.  New versions of rows are always created when doing updates.  Per Dr. DeWitt, competitors are not going to have the same level of performance.

9:27 AM

Optimistic multiversioning – this is the lock/latch-free mechanism in Hekaton (Dr. DeWitt says it so fast it’s hard to catch :) ).

When is it safe to discard “old” versions of a row?  When the begin timestamp of the oldest query in the system is ahead of that version’s End_TS – older versions are no longer needed.  Hekaton garbage collection is non-blocking, cooperative, incremental, parallel, and self-throttling.  It has minimal impact on performance, and it happens completely under the covers.

Steps:

  1. Updates create a new version of each updated row
  2. Transactions use a combination of timestamps and versions for concurrency control
  3. A transaction is allowed to read only versions of rows whose “valid” time overlaps the transaction’s Begin_TS
  4. Transactions essentially never block (WAIT, there’s a caveat here that Dr. DeWitt is glossing over…hm…)

Validation Phase

  1. The transaction obtains a unique End_TS
  2. Validation determines whether the transaction can be safely committed

Validation steps depend on the isolation level of the transaction – “new” isolation levels for Hekaton.

The key idea of read stability: check that each version read is still “visible” at the end of the transaction, using the End_TS.

Phantom avoidance requires repeating each scan and checking whether new versions have become visible since the transaction started.  If any scan returns additional rows, validation fails.  This sounds expensive, but keep in mind that all rows are in memory.  It is only performed for transactions running at the serializable isolation level, and it is still a LOT cheaper than acquiring and releasing locks.

There is also a post-processing phase with 3 sub-phases (which I couldn’t type fast enough…oy).

Checkpoints & Recovery – the data is not lost, have a normal checkpoint process, use logs to generate checkpoints (holds data during shutdown).  Restart/recovery – starts by loading a known checkpoint and scans log to recover all work since then, fully integrated with HA (giving readable secondaries of memory optimized tables).

The standard method for query execution on a relational system is complicated…and slow-ish.  Regular T-SQL access is via interop: queries can access and update both Hekaton and disk-resident tables, but interpreted execution limits performance.  When you put it all into a DLL, you get much faster execution.

Native plan generation – the query goes through the parser to produce a parse tree, which produces a logical plan that is fed into the optimizer, which produces a physical plan – but these are likely to be different for Hekaton (different algorithms and cost model).  The physical plan is then translated into C code (the ugliest C code you’ve ever seen – no function calls, per DeWitt), which goes into the C compiler and produces a DLL – a very slim one, containing only what’s needed to run the stored procedure.  The DLL is then loaded and invoked – it’s stored in the catalog.
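
To make that concrete, here is a minimal sketch of a natively compiled stored procedure against the hypothetical dbo.ShoppingCart table above – my example, not DeWitt’s:

-- NATIVE_COMPILATION triggers the T-SQL-to-C-to-DLL pipeline described above;
-- SCHEMABINDING, EXECUTE AS, and BEGIN ATOMIC are required for native compilation:
CREATE PROCEDURE dbo.usp_AddToCart
    @CartId INT, @UserId INT, @CreatedDate DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, @CreatedDate);
END;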

9:50 AM

Dr. DeWitt just gave a bunch of quantitative data showing performance improvements in terms of transactions/sec, instructions/sec, and CPU…I couldn’t copy it fast enough. :)

For more details, there is a DBA-focused session today at 1:30 PM in Room 208 A-B, and a dev-focused session tomorrow at 10:15 AM.

Dr. DeWitt takes some time to thank his team.  This is something I really appreciate about him.  He brings Rimma Nehme on stage (who surprised him by showing up today) and gives nothing but praise…ending with a slide that has a picture of Rimma and a bubble: “Let the woman drive.”  Love it!

You can download DeWitt’s deck and supporting files here.

Thank you Dr. DeWitt!

PASS Summit 2013: Day 1

Greetings from Charlotte, NC where Day 1 of the PASS Summit is about to kick off!  I just got back from this year’s #SQLRun, sponsored by SQL Sentry, and I can’t believe the size to which the event has grown since Jes ( b | t ) organized the first run two years ago.  It was great to see so many fellow runners at 6 AM this morning!

Today’s keynote kicks off with Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft, and one announcement you can probably expect is the release of CTP2 for SQL Server 2014, which you can download here (thank you Magne Fretheim for the link!).  Check out the SQL Server 2014 CTP2 Product Guide and Release Notes for more details.  If you cannot attend this morning’s keynote because you’re still recovering from the run, or recovering from last night, or because you’re not here, just tune in to PASSTV to watch it live.  Stay tuned to PASSTV to catch specific sessions live – note that each session will have a hashtag to follow, to which you can also Tweet questions.

Speaking of sessions, I’m up first after the keynote this morning with Making the Leap from Profiler to Extended Events in Room 217A.  If you’ve been a long-time user of Profiler and Trace, but know you need to get familiar with Extended Events (as much as you might dread it!), please join me.  It’s a demo-filled session that walks you through how to re-wire your approach to capturing live data.  We’re not diving too deep into XE – I’ll leave that to Jonathan’s two Extended Events courses on Pluralsight – but this session will get you over that initial “I don’t know where to start” feeling.  Hope to see you there!

During this morning’s keynote I also expect to see some members from the PASS Board of Directors – notably outgoing President Bill Graziano and incoming President Thomas LaRock.  If you’re interested, there is Q&A with the PASS BoD on Friday from 9:15 AM to 10:45 AM.  Not familiar with PASS?  Visit the Community Zone this week for more info!

I won’t be at the blogger’s table this morning to live blog the keynote, but I will be present tomorrow for the keynote from Dr. David DeWitt, which I’m looking forward to immensely.  Until then, I hope to see you at my session or elsewhere at Summit.  Say hi, send a tweet, and enjoy the day!

Thoughts on the Nomination Committee and PASS Board Elections

The PASS Nomination Committee finished interviewing those who applied for the Board last Wednesday, August 28, 2013.  In the event that you’re not familiar with the NomCom and what it does, there’s a short paragraph on the Nomination Committee page.  After the interviews, the candidates were ranked based on their written applications and their interviews with the NomCom.  The candidates were informed of their final ranking, and have the opportunity to withdraw their name from consideration before the slate is publicly announced.

The Process

When I applied to be on the NomCom, I had an understanding of what it entailed, but I didn’t know all the details.  For a bit of NomCom history, I would encourage you to review the PASS archive about the Election Review Committee (ERC).  Within the archive there is a Documents page, and the two documents there outline the process for the NomCom to follow when reviewing applications and interviewing candidates.  The NomCom NDA limits what I can disclose about the events of this year, but I can tell you that it was extremely eye-opening, and worth every minute I spent working with the other members of the team.  I learned even more about PASS as a whole, what important qualities are needed for a Board member, and gained great insight into the applicants who applied.  And of course the whole process got me thinking…

Thoughts on the Candidates

The slate for the three open Board positions will be announced in the coming weeks, and voting will begin soon after.  The applications originally submitted by the candidates will be available on the PASS site, and candidates will have additional opportunities to answer questions and campaign.

During the NomCom process I was reminded that people communicate best in different ways.  Some write extremely well, others speak extremely well.  Some are fortunate to excel at both.

To the candidates: I encourage you to know your strengths, and then take advantage of them.  If you can articulate your thoughts, your vision, and your qualifications in writing so that your passion also shines through: do it.  Get your content published – on the PASS site, on your blog, by someone who supports you.  If you can explain your opinions and ideas better by speaking: do it.  Host a conference call, a Google hangout, something, to have your voice heard (literally).

Thoughts on Voting

To anyone who is not running, I encourage you to exercise your right as an active PASS member to vote.

But I more strongly encourage you to take the time to find out as much as you can about each candidate before you vote.  I say this because it’s possible for an election to become a popularity contest, and you may or may not know every candidate who is running.  You may know candidates personally, or you may only know them by name.  You may have seen a candidate speak at an event.  You may recognize a candidate’s name because you’ve seen it in the PASS Connector.  You may not know anyone who’s on the slate (this was certainly the case for me the first year I voted in a PASS election).  Whether you know a candidate or not, before you vote you need to know what they think about PASS, what they’ve done, where they see PASS going, and whether they will be a good leader.  Take advantage of the information the candidates share to learn as much as you can about each individual.

Why You Should Vote

The easy answer, when someone asks, “Why should I vote?” is because you’re a member of the organization and it’s your right.  Some may argue it’s your duty.

I don’t know why you should vote.  But I’ll tell you why I’m voting.

Almost six years ago at a SQL Server conference – which was not the PASS Summit – I realized that I was not the only person who really loved working with SQL Server.  I discovered there were more like me.  Just over three years ago I finally realized there was a professional organization for all the people who loved working with SQL Server: PASS.  It took me over two years to find PASS.  Two.  And why does that matter?  Because in the three-plus years since I found PASS I have become a better professional.  Yes, I have found a lot more people like me and I have developed friendships that extend beyond the walls of an organization and the functions of an application.  And for that I’m eternally grateful.  But I am just as thankful for the opportunities I have encountered as a PASS member, and for the improvements I’ve made technically and professionally.

So for me, I vote to invest in my career, my future.  I can’t do it alone.  I cannot become better by just sitting in my corner of the world.  I need my co-workers, my colleagues, my user group, my fellow PASS members, my #sqlfamily.  And to keep that group moving forward, we need a strong and focused Board.  I’ll vote for that any day of the week and twice on Sunday.

PASS Board of Directors Applications

Yesterday, August 7, 2013, was the last day to submit applications for the PASS Board of Directors election.  I have the honor of serving on the Nomination Committee this year, and while there is very little I can disclose, I do want to take a moment to extend a heartfelt thank you to those that submitted an application for a Board of Directors position.

Thank you for taking the time to complete the application, I know it can be daunting.

Thank you for taking the time to think through what you want to see happen as a Board member.  Whether you are elected or not, I hope that you will still endeavor to make those things happen.

Thank you for talking to your friends and colleagues and asking them to be references for you (and thank you to those that agreed to do so).

And thank you, in advance, for committing additional time to meet with the NomCom to share your story and your goals.

When you run for any office, or any board, in any arena, you take a risk.  You become vulnerable, and you put your own name out there for discussion.  That takes courage, and I applaud all of you for stepping forward and taking a risk so that you may serve the SQL Server community as a member of the PASS Board.  I wish all of you the best, and I am confident that three very qualified individuals will fill the Board positions this year.  Good luck!

What I Know For Sure…After One Year at SQLskills

Today, August 1, 2013, marks my one year anniversary as a member of the SQLskills team.  Simply put, it’s been a great year.  Challenging in many ways, exhausting at times, but absolutely what I wanted (and expected) to be doing in this role.  Over the past year I’ve been asked many times, “How’s your new job?!”  It’s not-so-new now, but since I didn’t blog much about the non-technical side of life during the past year, I thought I’d use this post to tell you about my new job.  Specifically, the five most important things I learned during the past year.

Talking, out loud, is important

Working for SQLskills means I work remotely; therefore, I work from home.  This was quite an adjustment.  I knew it would significantly change the rhythm of each day, but I had no idea what it would look like.  I’ve considered writing about it many times, but a few months ago Merrill Aldrich wrote a post, Telecommuting, Month 9, that explained – very well – many of my own thoughts and observations.  In the comments, my friend Jes Borland, who also works from home, clearly articulated one challenge of working remotely.

I found out that what I miss is being able to say, out loud, “I have this idea. What do you think of it?” and getting immediate feedback.

Yes.  YES!  I love the solitude of my office…having the entire house to myself.  Some days I don’t even turn on music or anything for background noise.  But when I want to talk about something, I want to talk about it right now…out loud (funny sidebar, this video makes me laugh…let’s taco ‘bout it).  Trying to discuss ideas over email or chat isn’t the same.  It doesn’t create the same excitement, or the cross-pollination of ideas that occurs during a true conversation.  As Joe says, “it’s where the magic happens.”  It’s true.

Half the battle is realizing the problem.  The other half is figuring out what to do about it.  I make notes about what I want to discuss, and then fire off an email or set up a WebEx.  Jon and I have had numerous late night WebEx sessions where we talk through something, and suddenly at 1 AM I find myself with a litany of post-it notes spread across my desk and ideas churning in my head.  I love those moments.  They are not as organic or spontaneous as they were in an office setting, but I can still make them happen with a little effort.

When theory meets execution

SQL Server is a vast product, and many of us have seen and done a lot…but we haven’t seen and done everything.  As such, there are scenarios and tasks that we’ve read about, that make sense, but we haven’t actually walked through on our own.  We know what’s required to set up an availability group.  We have the checklist, the steps are logical, we can estimate how long it will take, and we’ve read every supporting blog post and technical article we can find.  But I’ve yet to find anything that replaces the actual execution of the task.  In some cases, what’s expected is actually what happens.  And that’s a wonderful thing.  But there are other times where what is planned is not what occurs.  I like this quote I just read in Bob Knight’s book, The Power of Negative Thinking:

Don’t be caught thinking something is going to work just because you think it’s going to work.

Planning beats repairing.

Theory and execution are not always the same – it’s certainly nice when they are, and when the implementation goes as planned.  But don’t rely on it.  Ultimately, practice and preparation are required to consistently ensure success.

Nothing can replace experience

If you’ve worked in technology a while, you know that a core skill is troubleshooting.  And to be good at troubleshooting, you must have an approach, a methodology that you follow as you work through an issue.  But to be really good at troubleshooting, you also need to recognize patterns.

I came into this role with many years of experience troubleshooting database issues.  But I spent the majority of that time looking at the same database, across different customer installations (if you don’t know my background, I used to work for a software vendor and as part of my job I supported the application database).  I became familiar with the usual database-related problems, and knew how to quickly identify and fix them.  We typically call this pattern matching, and I found it well explained in this excerpt from The Sports Gene, where it’s defined as “chunking.”  From the article:

… rather than grappling with a large number of individual pieces, experts unconsciously group information into a smaller number of meaningful chunks based on patterns they have seen before.

In the past year I’ve seen a lot of new patterns.  And some days were extremely frustrating because I would look at a problem, get stuck, and then ask another member of the team to look at the issue with me.  It was usually Jon, who would often look at the issue for a couple minutes and then say, “Oh, it’s this.”  It was infuriating.  And I would ask Jon how he knew that was the problem.  The first time I asked him, I think he thought I was questioning whether he was right.  But in fact, I just wanted to know how he figured it out so quickly.  His response?  “I’ve seen it before.  Well, maybe not this exact thing, but something similar.”  It’s pattern matching.  It’s chunking.  It’s experience.  You cannot read about it.  You cannot talk about it.  You just have to go get it.  And be patient.

I have a great team

I actually have two great teams: my team at work and my team at home.  I work with individuals who are experts in the SQL Server Community.  Their support is unwavering.  Their willingness to help knows no limits.  I am always appreciative for the time and the knowledge they share, and I am proud to not just work with them, but to call them friends.  To the SQLskills team: thank you for a fantastic first year – I look forward to what’s ahead!  (And happy birthday Glenn!)

My team at home is Team Stellato: my husband Nick and my two kids.  The first year of any job is an adventure, and for me there’s a lot of overhead – a lot of thought around what I’m doing, what I need to finish, what’s next, etc.  And much of that continues when I’m not at my desk.  I haven’t always been 100% present this past year, and over the last 12 months I’ve said, I don’t know how many times, that I’m still figuring it out.  And I am still figuring it out.  It’s hard to balance everything.  It’s hard to stay in the moment all the time.  I firmly believe I can do it, but I also believe I can do it better than I’m doing it today.  Thank you Nick for just being you – being supportive, understanding, and patient, and for making me laugh.  We’ll get there.  And thank you to my kids for trying to understand that being at home and being available aren’t always the same thing.  This year I will do better at being present during our time.

Make time for the gym

The last item to mention is something I need to be successful, but it may not be necessary for everyone.  It’s exercise.  It seems pretty straight-forward, right?  For some reason it’s a continual battle I fight in my head.  I don’t always have enough hours in the day to get done what I want to get done, so something has to give.  I’m very quick to sacrifice a run, a spin class, or a hot yoga session.  My thought process is: “I will need 30/60/90 minutes for that workout.  That’s time I could spend working/hanging out with my family/having lunch with a friend.”  But when I give up that workout multiple days in a row, my mental and emotional health suffer…more than my physical health.  A workout clears my head – solutions come faster, ideas flow easier, I am more focused when I need to be – and it reduces my stress.  It’s ironic if you think about it…making time to work out introduces stress (“Can I do everything?!”), but the act of working out makes everything else I need to do so much easier.  And it’s not about how far I run, or how many classes I get to in a week.  It’s the workout itself – whether it’s an intense 50 minutes of spin, a 1.5-mile run while the kids bike, or an hour in the yoga studio.

Year 2 and beyond

So, how’s my new job?  It’s great.  In many ways it is exactly what I expected, and in other ways it’s not – and that’s not a bad thing.  I didn’t anticipate every challenge I would have in working from home, but I am not afraid of them, nor do I think they’re unconquerable.  I have learned how to step back and critically look at where I am in my career, and evaluate what’s working well and what isn’t.  And this is working well.  It’s hard – hard because I am learning a ton and juggling many things, and that can be exhausting.  But I wouldn’t want it any other way.  I hate to be bored!  I absolutely love working with people who know so much, because it reminds me how much there is to know and what I can learn.  It is a fantastic motivator for me.  And the SQLskills team is fun.  A little weird at times :) but very fun and extremely supportive.  I cannot explain the importance of that, for me, enough.  And so begins year 2, let’s see what adventures this brings…IE0 anyone?!!