PASS Summit 2014: Day 1

It’s Day 1 of the PASS Summit, I’m live-blogging the keynote, and I can’t get on the internet. My DR strategy (hotspot on my phone) is failing as well. This may be late getting posted, but that’s ok. The show must go on.

Perry and I, ready for the keynote

This morning we’ll hear from the Microsoft team, including:

  • T.K. “Ranga” Rengarajan, Corporate VP of the Data Platform, Cloud & Enterprise at Microsoft
  • James Phillips, General Manager of Power BI at Microsoft
  • Joseph Sirosh, Corporate VP of Information Management and Machine Learning at Microsoft

While setting up at the blogger’s table this morning, PASS EVP of Finance and Governance Adam Jorgensen introduced me to Brendan Johnston, who joined the PASS team five weeks ago and is going to work on marketing for PASS. I had a couple of minutes to hear where he came from (Sony) and what he’s been working on so far (getting better messaging out to the community, including some additional communication in the weeks leading up to the Summit). He’s been busy already! I’m interested to see how PASS works to bring in data professionals who still do not know that PASS exists. It’s a challenge to bring a group that doesn’t know you exist into the fold.

8:21 AM

And we’re off, with PASS President Tom LaRock kicking off the day.

This is the 16th annual Summit. As a reminder, PASS TV is streaming the keynote today and tomorrow! Also, I’ll be on PASS TV today (Wednesday) at 2:50PM PST :) Tom points out that over 50 countries are represented here at the Summit from thousands of companies, including first-timers, veteran members, leaders, volunteers, and Microsoft employees.

“This is our community.”

“The people who are next to you will help shape your career, and you will shape theirs.” ES: I have Allen White on one side of me, Glenn Berry on the other. Yes, these two have shaped my career.

Tom introduces the PASS Board and asks attendees to share thoughts, comments, questions, and concerns with them throughout the week. On Friday at 1:15PM the Board will have an open Q&A in room 307 and 308.

The Summit started in 1999. Microsoft and CA Technologies had a vision of a community that would focus on Microsoft technology. With the content and networking from that first Summit, the community began to grow. Today, the PASS community reaches over 100,000 data professionals in over 100 countries with 285 chapters world-wide. PASS has provided over 1.3 million training hours since its inception.

Tom asks where you will be in 15 years. ES: Allen says to me: Retired. Ha, he’s a funny guy :) When we grow our skill set, we grow our opportunities.

“Growth is never by mere chance; it is the result of forces working together.” –James Cash Penney

Tom states that this quote represents PASS, as PASS has become a cornerstone in our careers. We say to others “come with me, and check this out.”  Tom challenges us to get involved and grow. The best connections you can make are only a handshake away, right here, this week. Talk to someone. Connect, share, learn. Tom reminds people to not let growth end after Summit. Stay engaged throughout the year.

This year PASS has amazing opportunities for attendees – this includes 200 training sessions and instructor-led workshops, the chance to get certified, the SQLCAT team, Women in Technology, the Community Zone, and more. Tom also takes a moment to mention the partners for the Summit. Without the sponsors, this would not be possible. ES: Please, PLEASE, take the time to visit the sponsors this week and thank them for all that they do. And I’ll give a shout out to one sponsor, SQL Sentry, right here for all they do for the community, including this morning’s #SQLRun which had over 100 people. Nicely done, Jes and SQL Sentry.

8:40 AM

The Microsoft team takes the stage, with Ranga up first. I had a chance to hear this team earlier in the week, and I was very impressed, especially with James Phillips. I probably shouldn’t have a bias, but his passion and past experience will serve this community well, I believe.

Ranga starts with his background – born in India, came to the US, received his education under Dr. DeWitt at Wisconsin (ES: one of my personal favorites – Dr. DeWitt, not Wisconsin) and then went to Silicon Valley, where he has been ever since. Ranga’s family includes two daughters whom he is encouraging to get into tech. Ranga tells a story about how he loves maps (so do I!) – I find this funny coming from a man, since they never ask for directions (ohhhh, so sorry!). Ranga loved MapQuest, and then GPS (ES: though his wife doesn’t like Jane, the GPS voice…I don’t either, have you ever heard her pronounce Spanish street names? Hilarious).

There is an incredible number of devices proliferating right now, and they are growing at astronomical rates. With these devices comes a lot of data generated and consumed, which is projected to grow 41% every year. EVERY YEAR?!?!?! How will we handle this? Based on trending, the projection is 1.6 trillion dollars of value from this data. We can all see personal instances of how data is changing how we work and live. This is a HUGE opportunity for us. At Microsoft, data is the thing that will light up future productivity. When we talk about productivity, we think it’s coming in the form of different types of data, and people wanting that data. NOW. The opportunity and challenge is to take that data and make a difference for everyone. People use that data to make decisions in their lives. Microsoft creates the platform to provide the insight to make those decisions.

This data culture will allow everyone to do more and achieve more in their decisions. It’s a cycle – if you capture data and manage it, and get insights from it, and you visualize and make decisions…it creates the need for more data. Microsoft experiences this now. This data platform is divided into granular areas with multiple capabilities. Ranga wants to talk about capturing and managing data. You must combine data across multiple areas (cloud, on-prem) and this platform must be comprehensive. Success for Microsoft is a solution that doesn’t require compromises; it is a culture of AND (not OR), and you CAN have it all. Microsoft can do in-memory and on disk, optimized for the hybrid environment (don’t take sides for on-prem and cloud, do what’s right for your business!), structured and unstructured data, scale up or scale out. No limit on what you can do with this data.

Key characteristics necessary to do more and achieve more:

  • Capture diverse data
  • Achieve elastic scale
  • Maximize performance and availability
  • Simplify with cloud

Ranga talks about the technologies that can help with this:

  • Azure Document DB (NoSQL DB service, schema-free, ACID to eventual consistency)
  • Azure HDInsight (Apache Hadoop service with HBase and Storm)
  • Analytics Platform System (PolyBase appliance combining SQL and Hadoop)
  • Azure Search (fully managed search service for mobile apps)

For scaling, need to scale up and scale out. Microsoft has:

  • SQL Server 2014 with Windows Server 2012 R2 (scale up to 640 logical cores, 64 vCPUs per VM, 1TB of memory per VM)
  • SQL Server in Azure VMs (new massive G-series VMs – the largest on the market)
  • Azure SQL Database (taking scale out approach – hyper-scale across thousands of DBs)

“Use the best tool for the job.”  ES: Yes. I ask “What problem are you trying to solve?”

Tracy Daugherty from Microsoft takes the stage for a demo – I met him the other night, good guy. He’s talking about how to find inventory he wants to move (orange pumpkins…Halloween is over, time to get that product out of the store and make way for holiday decorations!). Tracy is using Azure DB for the inventory, and showing JSON code that gets uploaded to update customer-facing pages. When capacity increases, you need to be able to support that – and it can be done via elastic scale. Tracy talks about sharding, which can be time-based, based on size, etc. It’s effectively one database, but broken out and spread across multiple shards.
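To make the sharding idea concrete: a shard map is essentially a lookup from a sharding key (a date range, for example) to the database that holds that slice of the data. The Elastic Scale preview manages this through a client library, but conceptually it boils down to something like this sketch (all table, column, and database names here are hypothetical):

```sql
-- Hypothetical shard map: route a request to a database by date range
CREATE TABLE [dbo].[ShardMap] (
    [RangeStart] DATE NOT NULL,
    [RangeEnd]   DATE NOT NULL,    -- exclusive upper bound
    [ShardName]  SYSNAME NOT NULL  -- target database for this range
);

INSERT INTO [dbo].[ShardMap] VALUES
    ('2014-01-01', '2014-07-01', N'Inventory_2014H1'),
    ('2014-07-01', '2015-01-01', N'Inventory_2014H2');

-- Find which shard holds a given order date
DECLARE @OrderDate DATE = '2014-11-04';
SELECT [ShardName]
FROM [dbo].[ShardMap]
WHERE @OrderDate >= [RangeStart] AND @OrderDate < [RangeEnd];
```

The application then directs its query to the returned database, which is what lets one logical database spread across many physical ones.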

This example is one of combining multiple products into one solution – get the right tools. We are at the beginning of the amazing possibilities here. All of these services are available in preview right now. Tracy notes: a new feature was GA’d last week (made generally available). Tracy wants to take main database and make sure it’s replicated across regions via geo-replication. Tracy picks a server over in Asia, and it replicates the existing database across the world in three easy steps.

Ranga says Microsoft provides the best up-time for any solution – four nines (99.99%). SQL DB is on a tear. The same engine is used for both SQL DB and SQL Server. There are a million databases running in SQL DB, and Microsoft is now truly understanding what data professionals go through. All fixes deployed to SQL DB get deployed to the next boxed version of SQL Server. SQL Server 2014 is getting a great reception; the in-memory OLTP is incredible – no one else is able to do that. One engine can handle multiple workloads. Microsoft is taking advantage of all the things happening in Azure. On-prem you can connect to the cloud in a trusted manner, which will allow you to extend your solution naturally.

Think about the world of data differently. We have looked at data as carefully orchestrated. The new world says: take the data, put it in the right engine, and leave it in the cloud. Know that you can get insight from that data at any time. Azure is now becoming the new data layer. Ranga mentions Stack Overflow (ES: I see Brent do a fist pump…but then some frustration as Stack’s solution is misrepresented) and all that they are able to do with their commodity hardware and SSDs – it scales out well. They use the software in very clever ways. This is awesome to see. Ranga also mentions Samsung, who has seen 24x improvement in performance with in-memory OLTP.

Ranga announces a major update to Azure SQL DB later this year that will be in preview. It represents an incredible milestone for Microsoft. This includes:

  • Leap in TSQL compatibility
  • Larger index handling
  • Parallel queries
  • Extended events (ES: YES!)
  • In-memory ColumnStore for data marts

More capabilities will roll out across multiple environments.

9:18 AM

Ranga brings Mike Zwilling, one of the Microsoft engineers, up on stage. A little background – holiday season, expecting an increase in transactions. People also want more real time insight. What if you could run analytics directly on OLTP data? (ES: Funny enough, I know companies that do that right now.) Mike gives a URL that viewers can go to and “buy” something – this is going to generate workload. Mike then shows the performance live. He shows the live view on the OLTP data, and shows performance via PerfMon.  (ES: I still love PerfMon.) Mike points out that the supporting table is using in-memory OLTP and nonclustered columnstore.
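As a rough sketch of what the demo’s “analytics on OLTP data” table might look like: the feature was in preview at the time, so the syntax below is from what eventually shipped in SQL Server 2016, where a memory-optimized table pairs with a clustered columnstore index (on disk-based tables the equivalent is an updatable nonclustered columnstore, which is what the demo described). Table and column names are invented:

```sql
-- Illustrative only: a memory-optimized OLTP table that also carries a
-- columnstore index, so analytic queries can scan the same hot data.
-- Syntax per SQL Server 2016; the keynote demo ran earlier preview bits.
CREATE TABLE [dbo].[SalesOrders] (
    [OrderID]   INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    [ProductID] INT NOT NULL,
    [Quantity]  INT NOT NULL,
    [OrderDate] DATETIME2 NOT NULL,
    INDEX [ccsi_SalesOrders] CLUSTERED COLUMNSTORE  -- analytics scan this
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The point of the design is that inserts hit the in-memory row store at OLTP speed while reporting queries read the columnstore, with no ETL in between.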

Mike talks about new functionality coming – the ability to stretch a table into Azure. (ES: I find this EXTREMELY exciting, I have a customer that would benefit from this right now.) History tables, for example, can be stretched to exist on the local server as well as in Azure. The older data is moved, behind the scenes, to Azure. (ES: Awesome. The only thing I don’t love? There’s a DMV to look at rows moved to Azure named db_db_stretch_stats. Dear MS: you need to take stats out of that name. Please.) Mike demos how, in the event of a failover, you can restore the local database, and when it finishes it is synchronized with Azure to bring it to the same point in time as what’s on-prem. Pretty cool. Ranga explains that the stretch concept is logical – you have an on-prem database that you extend, and it occurs in an invisible way.
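For reference, here is roughly what enabling Stretch looked like when the feature eventually shipped in SQL Server 2016 – treat this as a hedged sketch of the released syntax, not the preview bits from the demo, and note the table name is invented:

```sql
-- Stretch must be enabled at the instance level first
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Then an individual table can be marked for migration to Azure;
-- OUTBOUND means rows flow from the local table to the remote one
ALTER TABLE [dbo].[OrderHistory]
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```

Once enabled, queries against the table transparently span local and remote data, which is the “invisible” behavior Ranga described.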

9:28 AM

Joseph Sirosh takes the stage; he spent about 9 years at Amazon before joining Microsoft about a year ago. PASS is an amazing community. Communities come together to learn. Joseph wants to do the wave. (ES: Huh. Didn’t see that one coming. We’re going to do “PASS community rocks” as a wave. We’ll do this if you want, Joseph, three times. I’m a cheerleader and I’m not loving it. Where are everyone’s spirit fingers? Golf clap from Allen.)

On to machine learning – this is something Joseph is EXTREMELY passionate about. He mentions Storm. (ES: Did I ever tell you that my husband wanted to name our son Storm? That was voted down immediately.) Azure Machine Learning is about learning patterns in the past and anticipating what will happen in the future. Joseph brings Sanjay Soni to the stage. He asks how many people love Christmas shopping. I don’t. It stresses me out. More with Pier 1 and something about last-minute shopping (that sounds familiar).

Sanjay wants to figure out what items to put on end-caps. He’s going to talk about using Kinect sensors to gather data about where shoppers spend their time and what products they are lingering around. This is cool, but as a consumer, I don’t like it. The heat map was created using Power Map in Excel, looking at the last three days of data. Behind the scenes is Azure Data Factory – something new in the Azure portal. SSIS to the power of X in the cloud – all kinds of data sources coming in. Browser-based authoring environment. 1500 lines of code with a third-party app to do the analysis that 100 lines of JSON code does. JSON is a definite buzz technology this week. I do appreciate Sanjay’s enthusiasm.

Rugs and furniture were the hot items to put on the aisles. (ES: SERIOUSLY? Steve Jones tells me to lighten up. This from a man wearing a red hat with stripes on it. Maybe he needs to be more serious?!?) :) Ok, so in real-time the hot items are candles (?) and bar & wine. I told you…not rugs and furniture. Wine is NOT surprising. The product is a real-time dashboard. It’s the information coming from the sensors, with data going into an Azure database. With only 14 lines of code you can do stream analytics (versus 1000 from other vendors). Streaming is incredibly powerful.
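Those “14 lines of code” are an Azure Stream Analytics query, written in its SQL-like language. A hypothetical version of the sensor aggregation (input, output, and column names are all invented for illustration) might look something like:

```sql
-- Count sensor "dwell" events per product area in 5-minute tumbling windows
SELECT
    [ProductArea],
    COUNT(*) AS [Dwells]
INTO
    [dashboard-output]
FROM
    [kinect-sensors] TIMESTAMP BY [EventTime]
GROUP BY
    [ProductArea],
    TumblingWindow(minute, 5);
```

The window function does the heavy lifting that would otherwise take hundreds of lines of custom event-handling code, which is the comparison Sanjay was drawing.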

“Let’s predict the future so we can change the future.”

Azure Machine Learning – look at loyalty customer information. Predict what a customer will buy based on their last purchase. (ES: Hm. I’m going to start messing with the data and buy random stuff. Where is this crankiness coming from?!?! Are you getting junk emails from retailers that accurately predict what you’re going to buy? Is it wine? Ok.) Last demo from Sanjay: on his phone, he’s a member of the loyalty program. The app welcomes him, and gives him a list of products he might be interested in, based on previous purchases. This includes beer mugs. The app called into Machine Learning. Fascinating. And really scary.

9:49 AM

James finally takes the stage. He is relatively new to Microsoft, here for just over two years. He came from the Valley, where he spent that time building two companies. He is running the data experiences team – the Power teams and analytics. How do we bring data to people? (ES: ok, I admit that I’m fading)

James mentions that he was watching the Twitter feed backstage. That’s risky, and impressive. Do you change the angle of your talk?

One thing that makes Microsoft different: the ability to tie back from the cloud to on-prem. Microsoft is looking to build a data culture using Power BI’s multiple capabilities. James moves into a demo, which he does himself. Props for that. He’s sticking with Pier 1 for the demo (he kind of has to, but recognizes the feedback that’s been given on Twitter). James wants to get a pulse on the business – he does this using a Power BI dashboard – it’s a diagnostics component. James wants to understand why there’s a trend with candle purchases. Me too. James searches for a variety of pieces of information he wants to see, pins that information to his dashboard, and then arranges it in a way that he wants to see. I admit, the flexibility of the dashboard is pretty slick. The challenge is understanding how to get there.

(ES: Seriously, I love that James is doing his own demo.)

10:02 AM

James finishes up, and Ranga comes back. He thanks Pier 1 for their help with the demos. One more thing from Ranga…Azure Machine Learning is available for free on a trial basis today. Ranga wants us to use it today. Again, he states that Microsoft is building an amazing data platform. Think of what we’re seeing today as a comprehensive platform.

“Be the hero of your data-driven business. Think through the one thing that captured your imagination, and then go connect and learn with that. This is the time for data. The world is excited about data. You are the guardians of data. Together we can change the world. We can do more together!”



Let’s Get Ready to Summit!

Friends…the conference season is upon us (and could I have thought up a cheesier title for this post?!).   In less than two weeks the PASS Summit kicks off in Seattle and I am very much looking forward to a week of all things SQL Server. I will be in Seattle a full week, but it seems that’s never enough time to see everyone.  I’ll be posting here and on Twitter – with hopefully more pictures this year!  If you are not attending Summit, let me know if there’s something you want to hear about, or see, and I’ll see what I can do.  If you will be there and want to say hi or catch up, let me know! Even better, let me tell you where I know I will be to increase our chances of running into each other :)

Sunday, November 2

  • The MVP Summit kicks off in Bellevue, and most of my day will be spent absorbing as much as I can about what’s coming in SQL Server vNext.

Monday, November 3

  • Another day of MVP Summit…my eyes might be glazed over by the end of the day.

Tuesday, November 4

  • My pre-con with Jonathan kicks off at 8:30AM! If you haven’t signed up for a pre-con, there is still time… Jon and I are covering everything about Extended Events we can possibly fit into one day: Everything You Never Wanted to Know About Extended Events. We finalized our slide deck last week when we were in Chicago teaching IEPTO2, and my only concern is whether we can get through all the material (maybe not?!). Whether you’ve been a devout user of Trace and Profiler for years, or have never used them in your life, this session will have everything you need to embrace Extended Events and be off and running in SQL 2012 and 2014. I am *so* looking forward to this day. Oh, and check out the preview that Jon and I did with PASS…Jon’s answer to the first question still makes me laugh.
  • In the evening I plan to attend the Summit Welcome Reception – one of the best times to see A LOT of people and at least say hello.

Wednesday, November 5

  • I am kicking off the day with #sqlrun, organized by the best-roommate-ever, Jes Borland.
  • During Wednesday’s keynote, I will be sitting at the blogger’s table, live-blogging the event (any guesses on what vNext news the Microsoft team will share?).
  • After the keynote I plan to attend sessions throughout the day and will probably spend some time in the Community Zone.

Thursday, November 6

  • I will again be at the blogger’s table for Rimma Nehme’s keynote. I am looking forward to Rimma’s talk! I met her briefly before Dr. DeWitt’s keynote last year; she helped him put together that session and then surprised him by attending and lending her support.
  • I will be sitting at the blogger’s table during the WIT luncheon as well, and I love the slight change in format for this year. The past few years the luncheon has had a panel discussion so several individuals have the chance to share their stories, which I find fascinating and inspiring. However, I always struggle with an “action item” – what can I do to make a difference? Therefore, I’m extremely interested in hearing Kimberly Bryant talk about her non-profit, Black Girls CODE, and hopefully getting ideas for ways I can get more involved.
  • Thursday ends with my regular session which starts at 4:45PM: Five Query Plan Patterns to Watch For. Thursday is a busy day. I think this is the first time I’ve had a session land in the application development track, which is cool. If you’re not sure where to start in a query plan, this is a session for you. I assume that attendees already know the basics of plans, and how to capture them, so we’ll just jump right into talking about the patterns I often see. As usual, I’ll have lots of demos :)

Friday, November 7

  • Sleep in. I’m kidding! I have nothing planned for Friday so I will be attending sessions, meeting people, and catching up with anyone I haven’t seen yet.

Saturday, November 8

  • Flying home…for once I don’t have a flight at o-dark-hundred and I get home in time to put my kids to bed before I unpack and tell my husband all about the week. Ok…let’s be honest, I’ll probably fall asleep with the kids as I’ll be so exhausted. But it will be worth it.

And just in case you’re wondering, here’s the Summit session schedule for the entire SQLskills team (unfortunately, some sessions overlap):


Tuesday, November 4

Pre-Con: Everything You Never Wanted to Know About Extended Events [Erin and Jonathan]
Pre-Con: Performance Troubleshooting Using Waits and Latches [Paul]

Wednesday, November 5

Dealing With Multipurpose Procs and PSP the Right Way! [Kimberly] 4:30PM – 5:45PM
Analyzing I/O Subsystem Performance [Glenn] 4:30PM – 5:45PM

Thursday, November 6

Advanced Data Recovery Techniques [Paul] 1:30PM – 2:45PM
Solving Complex Problems With Extended Events [Jonathan] 1:30PM – 2:45PM
Five Query Plan Patterns to Watch For [Erin] 4:45PM – 6:00PM

Friday, November 7

Going Asynchronous With Service Broker [Jonathan] 9:45AM – 11:00AM

I look forward to seeing good friends and meeting new ones in a couple weeks.  If we’ve chatted on Twitter or email and haven’t met in person (or if we haven’t chatted but you follow my blog!), let me know if you’ll be at Summit so we can connect.  Safe travels everyone, see you soon!

edit added October 22, 2014, 3:40PM EDT:

p.s. For those of you who do attend Summit, two requests. First, introduce yourself to people. When you sit at breakfast or lunch, or when you find a seat in a session, take a second and say hello. You never know who you might meet and what might develop. Second, please thank the volunteers and the PASS staff for the time and effort they put in to create an amazing event. It takes a lot of people to make the Summit run smoothly; many of those people are volunteers, and a good number of them work behind the scenes and are not visible to most of the community.

What I Wish I Had Known Sooner…as a DBA

Mike Walsh posted 4 Attitudes I Wish I Had Earlier as a DBA, and tagged a few people to respond. Here goes…

  1. Surround yourself with people who are smarter than you. Interacting with colleagues who have had different experiences and know more about certain topics is the fastest way to grow, both personally and professionally. While it’s nice to be known as a person who can answer almost every question, where’s the challenge (and fun) in that?
  2. Take vacations, and don’t take your laptop with you. I have taken my laptop on way too many family trips. This doesn’t serve anyone. I’m not fully engaged in my time with my family, I don’t give my full attention to my customers because I’m stressing out about not spending time with my family, and my co-workers are just thinking, “What the heck?” If Jon or Glenn go on vacation, I am more than happy to fill in and do whatever is needed because they need that break. We all do. So stop worrying. You have earned this time. Leave the laptop at home.   Turn off your phone. Stop checking Facebook and Twitter. The world will still be there when you return.
  3. Take the time to mentor others. You didn’t get to this spot on your own, and that new DBA is never going to be able to fill in for you when you’re on vacation and want to be completely disconnected (see above). You don’t have to mentor just one person; you can mentor many, in different ways, and it doesn’t have to take a significant amount of time. At my previous job, when I worked with anyone in Technical Support I would explain what we were doing and why, and talk them through the process. This took maybe an extra 10 to 15 minutes. If they asked questions, I knew immediately they wanted to learn more, and I repeated that process every time I helped them going forward. Eventually, they could troubleshoot basic database issues without me, freeing up my time. At a leadership course I attended, the facilitator said, “You should always be trying to work yourself out of your current position.” That means you’re teaching someone how to take over yours.
  4. Save every script you write. I love writing T-SQL. Sometimes I think I should have been a developer. In the beginning, I didn’t save many scripts, so when a similar problem came up I had to start over. At first I didn’t mind, because I got to write some code and I’d try to remember how I did it last time and how to make it better. Then I didn’t have enough time, and writing that code became a bottleneck. Also: organize your scripts. Everyone has different methods, one of mine is to use the same first word to name scripts with a common task. For example, Check_Baselines.sql, Check_Backups.sql, Create_DB.sql, Create_RG.sql. Find a system, stick with it, start saving.

I’m not tagging anyone in this post by name, but if you’re thinking “I wish she had tagged me” then you’ve just been tagged.

Helping First Time Presenters

Nic Cain (@SirSQL) has a blog post that I highly recommend reading if you attend User Group meetings or SQLSaturdays: An Open Letter To SQLSaturday & User Group Organizers.  I think Nic tells a good story with a very relevant example of how a new speaker could have a negative first speaking experience.  And he has a great call to action for organizers and presenters.

I suggest that we raise that call to action to include veteran speakers. For example…my local user group is the Ohio North SQL Server User Group. To anyone who is also a member of this group and wants to present at a local meeting: let me know. I am more than happy to help you get started, provide feedback, and be there for your first session. Further, I’m attending SQLSaturday #304 in Indianapolis next month. If you’re presenting there for the first time and want me to be there for your session, let me know!

This is an open offer, with no expiration, and I do hope that someone takes me up on it. And I would be remiss if I did not mention the following individuals who were there for some of my first sessions and supported me:

  • Allen White (@SQLRunr) – my first session was at our user group in December 2010, and Allen stood in the back the entire time, in my line of sight in case I needed him
  • Mike Walsh (@mike_walsh) – with whom I co-presented at my first SQLSaturday in February 2011, something I would recommend new speakers consider (it’s not a great fit for everyone, but I enjoyed presenting with Mike)
  • Kendra Little (@kendra_little) – who sat in on my first solo session at that SQLSaturday in 2011, and laughed at my jokes :)
  • Rob Farley (@rob_farley) – even though Rob fell asleep during my first solo Summit session (Friday afternoon, end of the week, jet lag, and too many late nights, etc. :), he provided feedback I still remember to this day
  • Ted Krueger (@onpnt) – he helped me fine tune one of my favorite sessions (during a speaker dinner no less…I still owe Jes for that) and then sat through it and helped fill in some gaps when I needed help

To those of you that have been speaking for a while, I encourage you to seek out potential speakers – whether it’s in the community or at your office – and offer your help.  And for new speakers, please do not be afraid to ask for guidance.  Everyone starts at the beginning, with the same pile of nerves and fears about what could happen.  There are so many people who are willing to help make the process easier – seek them out, and have fun!


PASS 2014 Nomination Committee

Late last Friday afternoon PASS announced the results of the 2014 Nomination Committee election, and thanks to many of you, I was selected for the committee.  I appreciate all of the PASS members that voted (539 people this time), whether the vote was for me or not.  Every active PASS member with an updated profile can vote in elections, and I do believe it’s important to cast your vote each and every time.  It’s one opportunity to help direct the path of the organization.

I am fortunate to now have this opportunity to help shape “how things work” within PASS.  We have our first NomCom meeting this week, and I’m looking forward to getting plans in place for the summer and preparing for candidate application reviews and interviews…and of course, the process.  There is a significant portion of the election process that we will review, and I expect passionate discussion from my fellow committee members.

I will share relevant NomCom information on my blog as I can.  For those of you thinking about running, I recommend reviewing the Board of Elections Timeline, and note that the call for applications opens on August 6th (just 56 days away, in case that’s the way your brain works :) ).  If you’re thinking about running, I’d encourage you to do the following:

  • Start talking to current and past Board members.  Get an idea of what the job requires, not just in terms of time, but in terms of what they really do.  Does it match your expectations?  Is that where you want to invest your time within PASS?  Can you make the time commitment?
  • Start reading up how PASS works, as an organization.  Get familiar with the website, read the latest Board meeting minutes.
  • Talk to other members of the community about your interest, seek out individuals who would support you and ask if they would write a recommendation on your behalf.
  • Be prepared to explain why you’re running for the Board, you’ll get asked many times, many different ways.

Thanks again to all who voted!




PASS 2014 Nomination Committee Campaign Information

Hello PASS members! This post is the landing page for information related to my campaign for the PASS 2014 Nomination Committee and has a fair bit of information, so thanks in advance for reading!

If you haven’t reviewed my application, submitted to PASS, and my reasons for running, please visit my PASS Nomination Committee page.  On that page you can read a bit more about me, and also download my application.  To see the other candidates, please check out the main NomCom page.

My PASS application has information about why I’m running and my experience.  But just in case you haven’t read that, the quick summary is:  I was on the 2013 Nomination Committee and am running again this year because there are tasks we started as a committee last year that I feel I should help finish, and because I believe my experience as a committee member last year will provide valuable insight during candidate review.  I greatly value the opportunity to represent the PASS community and would be honored to help select the slate for this year’s Board of Director nominees.

If you want to review my NomCom-related posts from last year, I’ve provided the links below:

PASS Board of Directors Applications

Thoughts on the Nomination Committee and PASS Board of Elections

As I write additional posts this week, I will add those links below.

Finally, the PASS site is hosting a discussion page for the Nomination Committee elections here.  Please feel free to ask questions there, or in the comments below.  You can also contact me directly via email or say hi on Twitter.  I look forward to hearing from you, and please don’t forget to vote on June 3rd, 4th, 5th, or 6th!


Further Testing with Automatic Updates to Statistics

A new question on an older post, Understanding When Statistics Will Automatically Update, came in this week.  If you look in the comments, you see that Rajendra asks:

…I want know, whether schema change of a table (adding new / (deleting an existing) column) also qualifies for UPDATE STATS?

It’s an interesting question that I’d never considered before.  I had an instinct about the answer, but the scientist in me wanted to test and prove it, so here we go.  The scenario and code are similar to my original post, so this should be a quick read for many of you :)

The Setup

Start with a copy of the AdventureWorks2012 database, which you can download from CodePlex, on a 2012 SP1 instance, and confirm that the Auto Update Statistics option is enabled:

IF (SELECT COUNT(*) FROM [sys].[databases] WHERE [name] = 'AdventureWorks2012' AND [is_auto_update_stats_on] = 0) > 0
ALTER DATABASE [AdventureWorks2012] SET AUTO_UPDATE_STATISTICS ON;

Create a copy of the Sales.SalesOrderDetail table for our testing, and add the clustered index and one nonclustered index:

USE [AdventureWorks2012];

SELECT *
INTO [Sales].[TestSalesOrderDetail]
FROM [Sales].[SalesOrderDetail];

CREATE CLUSTERED INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID] ON [Sales].[TestSalesOrderDetail] ([SalesOrderID], [SalesOrderDetailID]);

CREATE NONCLUSTERED INDEX [IX_TestSalesOrderDetail_ProductID] ON [Sales].[TestSalesOrderDetail] ([ProductID]);

Validate the current statistics and modifications; the statistics should be current since we just created the indexes, and modifications should be 0 since we haven’t run any inserts, updates, or deletes:

Note: I refer to this query throughout the post as "the sys.stats query", rather than including it multiple times!
SELECT
OBJECT_NAME([sp].[object_id]) AS "Table",
[sp].[stats_id] AS "Statistic ID",
[s].[name] AS "Statistic",
[sp].[last_updated] AS "Last Updated",
[sp].[modification_counter] AS "Modifications"
FROM [sys].[stats] AS [s]
OUTER APPLY sys.dm_db_stats_properties ([s].[object_id],[s].[stats_id]) AS [sp]
WHERE [s].[object_id] = OBJECT_ID(N'Sales.TestSalesOrderDetail');
Current statistics and no modifications

Excellent, we’re starting with a clean slate.

The Test

Let’s first add a little bit of data (1000 rows), to up our modification count:

BULK INSERT [AdventureWorks2012].[Sales].[TestSalesOrderDetail]
FROM 'S:\SQLStuff\Dropbox\Statistics\Data\sod_1000.txt'
WITH (
DATAFILETYPE = 'native'
);

When we run our sys.stats query again, we see the 1000 modifications for both indexes, because both indexes had rows added:

Statistics and modifications after adding 1000 rows

Great.  Now add a new column to the table…let’s pretend that we have a new ProductID, but we want to keep the existing one for historical reasons, so our new column is, creatively, NewProductID:

ALTER TABLE [Sales].[TestSalesOrderDetail] ADD [NewProductID] INT;

Did adding this column cause statistics to update?  This was our main question, and if we run our sys.stats query we see:

Statistics and modifications after adding new column

Nothing changed.  Adding a new column to the table does not invoke an automatic update to statistics, nor should it: we haven’t modified the data in any existing statistic.  We would certainly expect query plans to change for existing queries that are modified to use this new column, but adding a column to a table does not cause any existing statistics for the table to update.

More Testing (because it’s good practice to keep proving out what we expect)

Just for fun, let’s see what happens if we modify some more data:

UPDATE Sales.TestSalesOrderDetail SET ProductID = 717 WHERE SalesOrderID = 75123;

If we check sys.stats, we see that our nonclustered index, IX_TestSalesOrderDetail_ProductID, has three more modifications than it had previously, but not enough data has changed overall for statistics to be invalidated.

Statistics and modifications after updating 3 ProductIDs

What happens if we update NewProductID?

UPDATE [Sales].[TestSalesOrderDetail]
SET [NewProductID] = [ProductID] + 1000;

Well, we updated 122317 rows, but only the NewProductID column, which isn’t part of either the clustered index or nonclustered index key.  Therefore, no update to statistics:

Statistics and modifications after updating NewProductID

To prove this out further, create a nonclustered index on NewProductID:

CREATE NONCLUSTERED INDEX [NCI_TestSalesOrderDetail_NewProductID] ON [Sales].[TestSalesOrderDetail] ([NewProductID]);

And if we verify the statistics and modifications, we see that our new NCI has zero modifications, as expected.

Statistics and updates after adding the nonclustered index on NewProductID

We’ll make another massive update to NewProductID, because the first update wasn’t correct, and then check sys.stats again:

UPDATE [Sales].[TestSalesOrderDetail]
SET [NewProductID] = [ProductID] + 10000;
Statistics and modifications after updating NewProductID a second time

Now we see the modifications – all 122317 of them.  If we query the NewProductID column now, we should invoke an automatic update to statistics, because they are invalid:

SELECT NewProductID, SalesOrderID
FROM [Sales].[TestSalesOrderDetail]
WHERE NewProductID = 10718;
Statistics and modifications after an auto update was invoked

And there you have it.  Adding a column to a table will not cause an automatic update to statistics to occur.  Automatic updates occur only when the threshold for modifications has been exceeded for a statistics key.
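As a back-of-the-envelope check, the classic modification threshold (the behavior in the SQL Server versions current at the time of writing; later versions changed the formula for large tables) can be sketched in a few lines.  This is a simplified sketch in Python for quick arithmetic, not official code:

```python
def auto_update_threshold(row_count: int) -> int:
    """Approximate number of leading-column modifications needed to
    invalidate a statistic under the classic rules: roughly 500 changes
    for small tables, 500 + 20% of rows for larger ones."""
    if row_count <= 500:
        return 500
    return 500 + int(0.20 * row_count)

# Our test table has roughly 122,317 rows after the inserts:
print(auto_update_threshold(122317))  # 24963
```

The 122,317-row UPDATE to NewProductID easily exceeds that ~24,963-modification threshold, which is why the subsequent query against NewProductID triggered an automatic statistics update.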

Finally, the clean-up code:
USE [AdventureWorks2012];
DROP TABLE [Sales].[TestSalesOrderDetail];

SQLSaturday Cleveland – Just Around the Corner

It seems like this winter is dragging along, especially if you live anywhere where it snows, as I do here in Cleveland.  But at the same time, I cannot believe that our SQLSaturday event is less than two weeks away!  If you live within a couple hours of Cleveland, I strongly recommend that you check out our event, which is on Saturday, February 8, 2014 in Westlake, OH.  You can find the schedule here, but let me list a few highlights for you:

  • 42 sessions
  • 38 different speakers
  • 18 sponsors and counting
  • 13 SQL Server MVPs
  • 4 SQL Server MCMs
  • 2 Microsoft PFEs

We have an amazing line-up of speakers and I am so thrilled with the content our event will provide to our local SQL Server community.  We have speakers coming from all over the US – Washington, Colorado, Minnesota, Georgia, and Massachusetts, to name just a few states from which colleagues will be traveling.  Thank you to our speakers who will brave the cold, snow, and wind to come to Cleveland (I’m hoping for a heat wave where we get above freezing, but let’s be honest, Mother Nature hasn’t been very kind this year).

In addition to our fantastic day-of sessions, we also have two pre-cons for which you can still register!  Argenis Fernandez will be hosting A Day of SQL Server Internals and Data Recovery, and Allen White will present Automate and Manage SQL Server with PowerShell.  Registration details can be found on our main SQLSaturday page, and the cost for these pre-cons is a great deal.  Both speakers have been selected for pre-cons at the PASS Summit previously, and the rate for those events is much higher.  Don’t miss this opportunity!

If you have any questions about our SQLSaturday, please don’t hesitate to contact us.  We hope to see you next Saturday, February 8th, and until then, stay warm!

Statistics Starters Presentation: Scripts, the Database, and Some Answers

This past Tuesday I presented “Statistics Starters” for the PASS DBA Fundamentals Virtual Chapter.  You can read the abstract here, and as you may have guessed from the title, it was a 200 level session on statistics appropriate for anyone who knew of statistics in SQL Server, but wasn’t exactly sure how they were created, how they were updated, how to view them, etc.  Over 300 people attended (thank you!) and I had some great questions.  I plan to answer the questions in a series of posts, starting with this one.

Question: Can we get a copy of the scripts in your demo?  And where can we get a copy of the database you used?

Answer: The scripts, slide deck, and database can be downloaded from the SQLskills demos and databases resource page.  The database I used for these demos, which I plan to continue to use for presentations, is the Lahman baseball database.  While the AdventureWorks database is well known and widely-used, I admit that I have a hard time thinking of good Sales and Product examples in my demos.  I know baseball a lot better than I know sales :)

Question: Are we able to rollback newly created statistics if the plans created after an update are bad?

Answer: (edited 2014-01-23 2:45 pm) Great question.  The answer is no… kind of.  This is one feature that exists in Oracle that I would be interested in seeing in SQL Server.  Oracle provides the ability to save and restore statistics.  You can even export statistics from one database and import them into another.  Pretty cool…potentially dangerous, but still cool.  However, it is possible to restore statistics in SQL Server if you save out the stats stream first, and then update the statistic with that stream.  Thanks to my colleague Bob Beauchemin (b) for pointing out how it can be done (I learn something new every day).  Johan Bijnens also messaged me to point out that you can script out statistics – which I always forget – and the next step is to update statistics with the stats_stream that you script out.  Take note: it is a hack.  Thomas Kejser blogged the steps here, and he has a fantastic disclaimer at the beginning because the method described is unsupported.  Before I write any more about the “feature”, I’m going to do a little testing and hacking of my own.  More to come!

Question: Why should I use the UPDATE STATISTICS command…isn’t sp_updatestats always the best option?

Answer: See my post Understanding What sp_updatestats Really Updates to see why I don’t recommend using sp_updatestats.

Question: Is it good to update statistics after rebuilding an index?

Answer: This is not recommended.  Remember that rebuilding an index updates statistics with a full scan – if you then run a command to update statistics, you are wasting resources, and the statistics may be updated with a smaller sample.  Depending on that sample, the statistics can provide less accurate information to the optimizer (not always, but it’s possible).

Question: Is it good practice to update statistics if I reorganize the index?

Answer: In general, yes, because reorganizing an index does not update statistics.  I recommend that you pro-actively manage your statistics, and not rely solely on automatic updates (assuming you have the AUTO UPDATE STATISTICS option enabled for your database).  If you are only reorganizing your indexes, make sure that you have another step or job that does update statistics.  If you either rebuild or reorg (or do nothing) based on the level of fragmentation, then you need to make sure you manage statistics accordingly (e.g., don’t update if a rebuild has occurred, do update if you’ve reorganized).
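The logic in the last two answers can be sketched as a simple decision function.  The 5% and 30% fragmentation cut-offs below are common rules of thumb, not values from this post, and the sketch is in Python purely for illustration:

```python
def maintenance_actions(avg_fragmentation_pct: float) -> list[str]:
    """Return the maintenance steps for an index, following the guidance
    above: a rebuild refreshes statistics (with a full scan) as a side
    effect, while a reorganize does not, so it needs an explicit stats
    update.  The 5%/30% cut-offs are rules of thumb, not fixed rules."""
    if avg_fragmentation_pct > 30:
        return ["REBUILD"]  # statistics updated by the rebuild itself
    if avg_fragmentation_pct > 5:
        return ["REORGANIZE", "UPDATE STATISTICS"]
    return ["UPDATE STATISTICS"]  # still manage statistics proactively

print(maintenance_actions(45))  # ['REBUILD']
print(maintenance_actions(12))  # ['REORGANIZE', 'UPDATE STATISTICS']
```

The point is simply that whichever branch your maintenance job takes, statistics must be handled somewhere – either by the rebuild or by an explicit update.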

I’ll answer a few more questions in my next post, thanks for reading!

Refer a Friend, Get a Gift Card!

Yesterday I made this video:

Cold, snow, and wind in CLE

Notice at the end I mention Tampa and IE0.  IE0 is our Immersion Event for Accidental DBAs, and it’s a newer course that Jonathan and I teach.

Not an Accidental DBA?  Doesn’t matter, please keep reading :)

If you refer a friend or colleague for IE0 or IEHW – that’s Glenn’s two-day hardware class – YOU get a $50 Amazon gift card!

See, I would bet that the Accidental DBA/Involuntary DBA/Junior DBA/person-who’s-managing-the-SQL-Server-instance-but-isn’t-quite-sure-what-they’re-doing does not read my blog.  They may not know about SQLskills, they might not even know who Paul and Kimberly are (it does happen…NO ONE in my extended family has ever heard of them, can you believe it?).

But you know that person needs training.  And we can help.

So reach out to a fellow member of your user group, a colleague at work, or someone you know is new to SQL Server, and let them know about the Accidental DBA training that we provide.  You can send them this link to learn more about our training and Immersion Events.  If they sign up for IE0, they can learn how to keep their SQL Server up and running, and you can buy something you probably don’t need but really want (and you won’t have to share it, because we won’t tell anyone you got a gift card).

And…in case you’ve been eyeing an IE course…today (January 3) is the last day for discounted pricing for the Tampa classes!  Book now, or book for one of our May events in Chicago.

We hope to see you, and a friend, in 2014!