SQLskills at the PASS Summit

The PASS Summit is coming up this fall in Seattle, and it will be here before you know it.  Though I want to mention that I am thoroughly enjoying summer and am really not ready for pumpkin anything!  I’m returning to Summit this year, as are Jonathan and Glenn.  We are all looking forward to the conference, and with the schedule announced this week, I wanted to share some more details about the full-day pre-conference sessions that Jonathan and I are each presenting.  Lucky Jonathan – we are both presenting as part of the Developer Learning Pathway along with fellow Clevelander Bert Wagner (one of my favorite peeps, ask him about coffee!).

I’ve included links to all of our sessions below, and since Jonathan and I are teaching on consecutive days, you can attend both our workshops 😊

My pre-conference session will dive deep into how to leverage Query Store to detect and correct performance issues in existing SQL Server 2016 and newer environments, and Jonathan’s session will teach you how to properly use the new features in SQL Server 2017 and SQL Server 2019 to get better performance.  If you’re planning an upgrade OR moving to Azure, both of these workshops are applicable.  We think these sessions complement each other very well, and even though we aren’t presenting together, we will each reference content and concepts discussed in the other session.  Do you have to attend both to make sense of it all?  Definitely not.  But we do have a ton of great information to cover and are already working on new demos to showcase the latest and greatest in SQL Server.
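
If Query Store is new to you, here’s a minimal sketch of the kind of thing we’ll build on in the workshop – enabling the feature and pulling the highest-duration queries out of the catalog views.  The database name and option values are just placeholders:

```sql
-- Enable Query Store on a user database (SQL Server 2016 and newer);
-- the database name and settings here are illustrative placeholders.
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_STORAGE_SIZE_MB = 1024);

-- Quick look at the highest average-duration queries captured so far
SELECT TOP (10)
       qt.query_sql_text,
       q.query_id,
       p.plan_id,
       AVG(rs.avg_duration) AS avg_duration_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text, q.query_id, p.plan_id
ORDER BY avg_duration_us DESC;
```

The workshop goes far beyond this, but those few catalog views are where most Query Store troubleshooting starts.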

If you have questions, please email us!  You can email me or Jonathan directly to ask about our pre-conference sessions if you don’t have a clear picture after reading the abstract.  We know that there are a lot of great workshops at Summit, and we definitely want to make sure you learn what will help you most.  Glenn, Jonathan and I look forward to seeing you in November!

Erin’s workshop on Monday, November 4, 2019: Performance Tuning with Query Store in SQL Server

Jonathan’s workshop on Tuesday, November 5, 2019: Bigger Hardware or Better Code and Design?

Erin’s general session on Wednesday, November 6, 2019 at 3:15PM: Understanding Execution Plans

Glenn’s half-day session on Wednesday, November 6, 2019 at 3:15PM: Dr. DMV’s Troubleshooting Toolkit

Jonathan’s general session on Thursday, November 7, 2019 at 10:45 AM: Eliminating Anti-Patterns and RBAR for Faster Performance

Glenn’s general session on Friday, November 8, 2019 at 9:30AM: Hardware 301: Choosing Database Hardware for SQL Server 2019

PASS Summit 2017: Day 2

Day 2 is starting here at PASS Summit in Seattle – watch this space for updates over the next hour and a half!

Grant Fritchey is kicking off day 2!  I’m a big fan of Grant.  He talks about the community and how people should take advantage of what’s available here at Summit.  I couldn’t agree more – I wouldn’t be where I am in my career without this conference, nor would I have the friends I do all over the world.  Grant hands it over to Denise McInerney, VP of Marketing, and Denise shares her journey within PASS (she’s a six-year Board member).  Denise continues driving home the point about the value of this conference and PASS.  She then talks about the volunteers for PASS, and announces the winner of the PASSion Award.  This year’s winner is Roberto Fonseca.

Denise is talking about the survey feedback from the past couple of years, and session evaluations at PASS.  *PLEASE* complete these by the end of next week (if not earlier) – both PASS and the speakers truly value this feedback.  If you want to provide additional feedback, there is a Board Q&A tomorrow (Friday) at 2 PM.  Today is the WIT lunch, and Denise announces that next year’s Summit is November 6 – November 9, 2018.  Denise introduces Rimma Nehme, a Group Product Manager/Architect for Azure Cosmos DB and Azure HDInsight at Microsoft.  Today’s keynote is going to uncover CosmosDB.  It will be technical!  Let’s go!!

Do we need another database?

This is the wrong question to ask.  If you look at the rising data problems and needs, most of the production systems in use today (particularly the ones designed in the 70s and 80s) weren’t built to address them – modern workloads and needs are what’s driving this.  90% of the world’s data was created in the last 2 years alone.  The estimated growth in the next 3-5 years is 50x.  Another trend is that data is global, and another is that data is big.  Not just a few TB – companies are processing hundreds of TB to petabytes.  Every 60 seconds 200 million emails are generated.  Rimma is throwing out ridiculous numbers about the amount of data being generated.  I can’t keep up!

Data is also interconnected.  What you do in Seattle can be connected to another location in the world.  This is the butterfly effect.  We are experiencing about 125 exabytes of data (that’s a lot of zeroes).  Companies are looking at ways of extracting that data and monetizing that information.  Another trend is that the balance of power continues to shift from structured to unstructured data.  About 80% of data originates as unstructured data.  Never push the data to computation – push the computation to the data.

When dealing with distributed systems, you need to deal with a lot of differences – for example, different architectures.  In 2010 an engineer inside Microsoft observed this and identified that we need a different architecture to deal with these fundamental differences in data at scale.  This is how Project Florence was born, which is the base of what is now CosmosDB.  It was one of the exemplary partnerships between Microsoft Research and the Azure Data team.

At the time they were working to address the problem of data for large-scale applications within Microsoft (e.g. Xbox).  They tried the “earthly databases”, they tried building something on their own, and those options weren’t working.  Hence Project Florence was created to meet the internal needs.  A basic set of requirements was laid out:

  1. Turnkey global distribution
  2. Guaranteed low latency at the 99th percentile, worldwide
  3. Guaranteed HA within region and globally
  4. Guaranteed consistency
  5. Elastically scale throughput and storage, at any time, on demand, and globally
  6. Comprehensive SLAs (availability, latency, throughput, consistency)
  7. Operate at low cost (this is crucial! – the first customers were Microsoft departments)
  8. Iterate and query without worrying about schemas and index management (applications evolve over time, and rapidly)
  9. Provide a variety of data model and API choices

This manifests into three principles that have evolved:

  1. Global distribution from the ground up
  2. Fully resource-governed stack
  3. Schema-agnostic service

It is very hard to build any service (particularly with such requirements).

If it was easy, everyone would do it (via NASA).  So this is how CosmosDB was built.  This is used internally by Microsoft.  It is one of the fastest services in the cloud.  It is a ring-0 service, meaning it is available in all regions by default.  It is millions of lines of C++ code.  It is 7 years in the making – it is truly not a new service.  Here is what it looks like (a bit of marketing here).

The foundations of the service for a globally distributed, massively scalable, multi-model database service are:

  1. Comprehensive SLAs
  2. Five well-defined consistency models
  3. Guaranteed low latency at the 99th percentile
  4. Elastic scale-out of storage and throughput
  5. and…

Fine-grained multi-tenancy.  This cannot be an afterthought.  From left to right, you can take a physical resource like a cluster and dedicate it to a single tenant (e.g. customer or database).  You can take an entire machine and dedicate it.  You can go another step and share a machine among homogeneous tenants.  The final level of granularity is taking that machine and dividing it between heterogeneous tenants while still providing performance and scalability.

In terms of global distribution, Azure has 42 regions worldwide…36 are available, 6 are still being built out.  You can span your CosmosDB across all of those regions.

Within a database account you have a database.  Within that you have users and permissions.  Within that, CosmosDB has a container – a container of data with a particular data model.  Below that is other user-defined code.  The database may span multiple clusters and regions and you can scale it in terms of these containers.  It is designed to scale throughput and storage INDEPENDENTLY.  How is the system designed behind the scenes (10,000-foot view)?  Within regions there are data centers, within data centers there are stamps, within those there are fault domains, within those there are containers, and within those the replicas.  Within the replicas is the data.  The database engine is where the secret sauce comes in – Bw-tree indexes, resource manager, log manager, IO manager, etc.  On any cluster you will see hundreds or thousands of tenants.  They need to make sure that none of the tenants are noisy.

Another tenet that is important is the concept of partitioning.  How does CosmosDB solve this?  The tenants create containers of data, and behind the scenes these are partitions.  Each partition is comprised of 4 replicas.  This is consistent and reliable.  Each one is a smart construct.  Out of those partitions, you can create partition sets.  These can then span clusters, federations, data centers, and regions.  You can overlay topologies to implement solutions that span multiple regions across the planet.  You need to make sure that the applications then work really well (particularly when partition sets are merged or split).  You have the partition, which is a data block, and then you can build partition sets of various topologies.

What are some of the best practices?

  1. Always select a partition key that provides even distribution
  2. Use a location-aware partition key for local access
  3. Select a partition key that can be a transaction scope
  4. Don’t use a timestamp as the partition key for write-heavy workloads

The resource model summary: resources, resource model, partitioning model.

Core capabilities: Turnkey global distribution – this is adding regions with a click.  You can come to the Azure portal, see a map of the entire world, and pick the regions where you want your data to be.  The data is replicated behind the scenes and then it’s available for access.  You’re not dealing with VMs or cores.  You can add and remove regions at any time and the application does not need to be re-deployed.  The connection between application and database is logical; this is enabled by the multi-homing API capability, so you connect to a logical endpoint rather than a physical one.  Another capability is that you can associate priorities with each of the regions.  If there is an outage or failover in a region, the failover will occur in the order of priority, and that can be changed at any time.  Something added for customers is the ability to simulate a regional outage (but don’t go crazy with this, says Rimma!).  You can test HA of the entire application stack.

Another capability is being able to provide geo-fencing.  In other parts of the world there can be regulations where data has to be present in particular regions, so if data needs to stay within a location for compliance requirements, that capability is provided.

How does always-on availability work?  By virtue of partitioning, the data exists in multiple locations.  If one replica goes down, the application will be unaffected.  If a partition goes down, the application will go to a partition in another region.  If an entire region goes down, the application will go to another region.  The data is always available.

Another area of interest is active writers and active readers in any region.  Right now this is turnkey at the database level, but they are looking to push it down to the partition key level (a true Active-Active topology).  Online backups are available, and they are stored in Azure blob storage.  The key point is that backups are intended for “oops I deleted my data” scenarios, not for a data center going down (that’s why you have multiple regions).

The second capability is elastic scale-out.  Data size and throughput scale independently.  You could start out with a small amount of data and keep adding more and more, and the back end will seamlessly scale.  Transactional data tends to be small, web and content data is medium sized, and social data/machine-generated data is much larger.  As data size or throughput grows, scaling occurs, and it happens seamlessly behind the scenes.  This is done with SSDs behind the scenes.

Resource governance is the next component.  As operations occur, they consume RUs (Request Units).  You provision the RUs that you need (how many transactions/sec do you need?).  All replicas (just a partitioning of the data) get a certain budget of RUs.  If you exceed it, you’ll get rate limited.  At any time you can increase provisioned throughput and then support more transactions/sec.  You can also decrease it at any time.

An RU is a rate-based currency, partitioned at a granularity of 1 second.  It is normalized across database operations.  Operations are costed via machine learning pipelines that cost queries (e.g. scans, lookups, etc.).  They have run machine learning models on telemetry data and then calibrated the cost model for RUs accordingly (data driven).  Back to the partitioning model: at any time you can change throughput, and you simply specify the throughput (RUs) you want.  Behind the scenes the re-partitioning occurs, and each partition gets more RUs to provide the throughput asked for.  This is where splitting/merging partitions matters, but it happens behind the scenes and you don’t have to worry about it.

What about when you add regions?  You want to add more RUs so you don’t starve existing regions.  Those RUs are spread across all partitions and regions.  Rimma shows how one customer elastically provisioned resources during the holiday season to size up to meet demand, and then size down when no longer needed.  Rimma shows a graph of RUs over a 3-day period – at the top end there are 3 trillion RUs.  (THREE TRILLION IN THREE DAYS PEOPLE)

Various nodes have a various number of RUs serving different workloads, and you can look at the different tenants and partitions in there.  Multi-tenancy and global distribution at that level is incredible.

Another tenet: guaranteed low latency at the 99th percentile.  This was a core requirement because time is money.  From the business point of view, twice as much revenue is lost to slowdowns.  So the system is designed for this.  At the 99th percentile, reads are less than 10ms, measured at a 1KB document (which is 80-90% of the workload).  At the average, you will observe lower latency (less than 2ms for reads and 6ms for writes).  How is this accomplished?  Reads and writes are served from the local region, from SSDs.  The database is designed to be write optimized and uses a latch-free database engine.  All data is indexed by default.  This is a fundamental difference from relational databases; here we have automatically indexed SSD storage.  Customer example: an application in California and data in the far east.  They added another region and latency dropped.  Over Black Friday/Cyber Monday, latencies were less than 10ms for reads and 15ms for writes.  Azure Cosmos DB lets you work around the speed of light: if you have a database in Chicago and friends in Paris who want to read your data, and this was a centralized database, they would request the data from Paris, and getting that data from Chicago to Paris takes 80-100ms.  With CosmosDB you get it in less than 10ms because of those regional locations.

The last topic here is the consistency model in CosmosDB.  How does it work?  When you are dealing with any distributed system, whether databases or other systems, you are typically faced with a fundamental trade-off between latency, availability, consistency, and throughput.  With a centralized database, all requests go against the primary copy.  With global distribution and geo-replication you get HA and low latency.  But what happens if one replica can’t communicate with the others and updates are being made?  What kind of consistency guarantees are made?  This can be a problem!  Do you need to wait for data to be synchronized before you serve it?  Do you want strong consistency or eventual consistency?  Do you want the red pill or the blue pill?  With a relational database you get perfect consistency – it won’t serve the data until a quorum is in agreement.  The price there is latency.  On the other hand, the whole movement toward no consistency guarantees means low latency.  But real-world consistency is not a binary choice as just described.

What about something in between?  The research literature talks about the wild west of consistency models (not one or the other).  A recommended paper is Replicated Data Consistency Explained Through Baseball by Doug Terry of Microsoft Research.  It uses real-world examples from baseball: depending on who you are in the game, you might get value out of different consistency models.  The engineers asked the question: can we pick intermediate consistency models that are easy to configure, programmable, and present clear trade-offs?  Most real-life applications don’t fall into the two extremes.  The intermediate models are bounded staleness, session (monotonic reads and writes local to the geo location), and consistent prefix (when updates are applied, prefixes are guaranteed to be consistent).

How is this accomplished?  They use TLA+ specifications to specify the consistency models.  If you don’t know TLA+, check out the video by Leslie Lamport, who was an important figure in how the system was designed.  Leslie is a Turing Award winner, creator of the Paxos algorithm, and a founding father of what is used in the stack.

They operationalized the five different consistency models, and they use telemetry to see how those models are actually used.  Only 3% use strong consistency, 4% use eventual, and 93% are using the three models in between.  Depending on the consistency model specified, more computational work might be needed, which requires RUs.  You have to make the trade-offs accordingly, and you can monetize this and decide what’s best.

Comprehensive SLAs…high availability SLAs are not enough.  Microsoft is in the service business, not just the software business.  Typically services don’t give any SLA, or give one only for HA.  When they approached this problem, they asked “What are all the things that developers and customers really care about?”  They care about performance, throughput, consistency, availability, and latency.  So this is the first service in the market that has published comprehensive SLAs backed by Microsoft financially.  You can now see that if you come in and run a workload, you will get guaranteed performance, throughput, consistency, availability, and latency.  Availability is tracked at the tenant and partition level at 5-minute granularity.  Customers can see their runtime statistics against their SLA.

The service is also multi-model.  They wanted to enable native integration with different data models.  Behind the scenes it is just the ARS model (atom-record-sequence); all models get translated to ARS.  That makes it very easy for the service to on-board other data models now and in the future.  If you want document and graph, you do not need two copies of the data – both can be handled by the same set of data.  This is a powerful combination – to look at data through different lenses.

Why a schema-agnostic approach?  Modern applications that are built in the cloud are not static.  You can start with one schema, then add more tables and new columns…you need a robust approach to handle these scenarios.  The object model is schema-free and the data gets stored as-is.  How do you query this data?  Behind the scenes the data is ARS.  At global scale, dealing with index and schema management is a nonstarter.  In CosmosDB there is schema-agnostic indexing.  The index is a union of all the document trees, consolidated into one structure that keeps only the unique values.  All of this structural info is then normalized.  It is an inverted index, which gives optimal write performance.  It can identify where documents are located and then serve them up.  The index overhead in practice is pretty small.  There is a published paper, Schema-Agnostic Indexing with Azure Cosmos DB, go read it!
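
As a concrete illustration (my own example, not from the keynote), this is roughly what querying schema-free documents looks like with the Cosmos DB SQL syntax – you filter on properties, including nested ones, without ever declaring a schema or creating an index.  The document properties here are made up:

```sql
-- Hypothetical documents in a container aliased as c; no schema was declared,
-- yet every property (including nested ones) is indexed automatically.
SELECT c.id,
       c.address.city,
       c.lastOrderTotal
FROM c
WHERE c.address.city = "Seattle"
  AND IS_DEFINED(c.lastOrderTotal)
ORDER BY c.lastOrderTotal DESC
```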

The Bw-tree in Azure Cosmos DB is highly concurrent, non-blocking, and optimized for SSDs.  It avoids random writes.  There are three layers: the B-tree layer, the cache layer, and the log-structured store (see the paper).  Rimma is going faster now.  I’m not kidding.  The Bw-tree is implemented as delta updates.  There is a mapping table to the updates, and updates are stored as deltas (sounds like an in-memory index structure?).

Rimma shows the architecture of query processing – there are different semantics, but they share the same underlying components (compiler, optimizer, etc.).  The engine is very flexible, and they expect that in the future it will host other run-times.  The multi-API approach allows native support for multiple APIs.  If you want to store data in the cloud but not re-write your app, you can.  There are more APIs coming in the future.  What does this strategy enable?

  • No recompilation needed
  • Better SLAs, lower TCO
  • Leverage the existing OSS tool-chain, ecosystem, and development/IT expertise
  • Lift and shift from on-premises to cloud
  • No vendor lock-in
  • Symmetric on-premises and cloud database development

All happy databases are alike, each unhappy database is unhappy in its own way (Kyle Kingsbury via Leo Tolstoy).

How do they run the service?  Weekly deployments worldwide.  16 hours of stress testing every day.  It’s like a magic factory across the globe.  They also run variant and invariant checks.  There are bots fixing nodes that might have issues.  Consistency checking and reporting run over the data continually.

In conclusion…Rimma wanted to put herself in our shoes.  It’s a lot of information to digest…particularly if you’re not invested in this technology.  Why should you care?  Rimma brings up a quote from Darwin:

It is not the strongest species that survive, nor the most intelligent, but the ones most responsive to change.

You can try CosmosDB for free – no need for credit card info, etc.  Your childhood dream of going to the cosmos (space) will be fulfilled.

Key points to remember:

  • global distribution, horizontal partitioning and fine grained multi-tenancy cannot be an afterthought
  • schema agnostic database engine design is crucial for a globally distributed database
  • intermediate consistency models are extremely useful
  • globally distributed database must provide comprehensive SLAs beyond just HA

This is a hidden gem, but the bigger message is to remember that the entire NoSQL movement is a counter-culture movement.  The question is: how would we build databases if we started today?  Without the legacy that we know, would we look at things differently?  Would we focus on limitations or focus on the bigger picture?  Sometimes it is ok to break the rules and try different things.  Life is too short to build something that nobody wants.  If we focus on real pain points – not the “nice to have” things, but the real issues – and abandon our constraints and self-imposed limits, we can create a modern SQL.  Rimma ends by thanking Dharma Shukla and the entire CosmosDB team.

<collapses>

PASS Summit 2017: Day 1

Hey friends!  After a one-year hiatus I am BACK at the PASS Summit and ready to blog the day 1 keynote 🙂  I will update this post throughout the morning so refresh every so often to see the changes.  You can also follow along on Twitter – check out the #PASSsummit hashtag.

Reminder: My session is today at 1:30 in 6B, Query Store and Automatic Tuning in SQL Server, I hope to see you there!

Today’s keynote is headlined by Rohan Kumar (who I just got to meet – thank you Mark Souza), and he’s stated that it will be a lot of fun – you can preview what’s coming here.  Rohan is the General Manager of Database Systems Engineering for Microsoft, and there are a fair number of demos coming our way.

PASS News

PASS President Adam Jorgensen starts off the day – this is the 19th PASS Summit.  Holy cow.  The PASS keynote is being streamed live via PASStv if you’re not able to be here in person.  If you are at the Summit this week and you have any problem with your SQL Server implementation that you need answered, go to the Microsoft Clinic.  It is on the 4th floor near the Community Zone, and there are numerous Microsoft Engineers available to help.  It’s an amazing resource at this conference.

Adam takes a moment to thank the individuals that volunteer for PASS – the organization is primarily run by volunteers, and that includes the PASS Board.  The Board will have an open meeting on Friday at 2PM which anyone can attend.  If you have feedback or want to better understand how things work, definitely attend.  Outgoing PASS Board members are Jen Stirrup and Chris Woodruff.  Newly elected members are John Martin, Diego Nogare, and Chris Yates.  Adam takes a moment to thank outgoing Past President Tom LaRock and Exec Board member Denise McInerney as their time on the Board comes to a close.

Please take time to meet our sponsors in the Exhibit Hall.  The support of our sponsors makes *so* many things possible not just at Summit, but throughout the year.

And Rohan takes the stage…

SQL Server 2017

Data, AI, and Cloud are three disruptive technology trends…and we need to figure out how to better migrate data to the cloud (I’m asking: how do we make it easier?).

At its core, the modern data estate enables simplicity.  It takes in any type of data, and allows a hybrid setup between on-premises and the cloud.  Rohan asks how many people believe they can move their data/solution to the cloud?  About 1% of the attendees raise their hand.  He then asks how many people think that deploying to either the cloud or on-prem is what’s needed in the future?  The majority of people raise their hands.

SQL Server 2017 was released October 2, 2017, and SQL Server 2016 was released April 1, 2016…that’s a very fast release cycle for Microsoft, and that’s been possible because of the cloud-first approach, which translates to an increased cadence of SP and CU releases.  Reminder: in SQL Server 2017 there’s a shift to CU releases every month, and no more SPs.  Glenn blogged about this in September.  Rohan brings Bob Ward and Conor Cunningham on stage for the first demo.  They’re wearing Cowboys jerseys.  *sigh*  If you see Bob this week ask him how the Rangers did this year…

Bob and Conor step through a demo showing the performance benefit of a new HPE DL580 Gen 10, using scalable persistent memory (NVDIMMs) – a query that takes 15 seconds on SSDs takes about 2 seconds on the HPE storage.  And it’s cheaper?  I’m deferring to Glenn for the hardware details!!

Bob introduces a “typical” parameter sniffing issue – and then shows how to use Automatic Plan Correction (which relies on Query Store under the covers)…which I’ll be showing today in my session as well!
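
For reference, here’s a minimal sketch of what that looks like in T-SQL (the database name is a placeholder): the tuning recommendations DMV surfaces the plan regressions that were detected, and the database option tells SQL Server to force the last known good plan automatically.

```sql
-- Review plan-regression recommendations that SQL Server 2017 has detected
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;

-- Let SQL Server force the last known good plan automatically
-- (the database name is just a placeholder)
ALTER DATABASE [YourDatabase]
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
```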

New features in SQL Server 2017:

  • Advanced Machine Learning with R and Python
  • Support for graph data and queries
  • Native T-SQL scoring
  • Adaptive Query Processing and Automatic Plan Correction

There is much more available in 2017, as noted in the release notes.
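
As a quick illustration of the graph support listed above (my own minimal sketch, not from the keynote demos – the table and column names are made up), node and edge tables are regular tables created with AS NODE/AS EDGE, and MATCH expresses the traversal:

```sql
-- Minimal graph sketch: people who follow other people (illustrative names)
CREATE TABLE dbo.Person (PersonID INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE dbo.Follows AS EDGE;

INSERT INTO dbo.Person VALUES (1, N'Erin'), (2, N'Jonathan');

-- Connect node 1 to node 2 using the edge table's pseudo-columns
INSERT INTO dbo.Follows ($from_id, $to_id)
SELECT p1.$node_id, p2.$node_id
FROM dbo.Person AS p1, dbo.Person AS p2
WHERE p1.PersonID = 1 AND p2.PersonID = 2;

-- Who does Erin follow?
SELECT p2.Name
FROM dbo.Person AS p1, dbo.Follows, dbo.Person AS p2
WHERE MATCH(p1-(Follows)->p2)
  AND p1.Name = N'Erin';
```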

Docker Containers

Tobias Ternstrom and Mihaela Blendea take the stage to talk about containers running SQL Server.  Mihaela shows the build definition, which starts a container based on the SQL Server build.  On top of that, they restore a production database to it and run any additional scripts (e.g. to obfuscate and remove some data), then push out the images.  Tobias starts typing in a command line window…this I love.  He knows what he’s doing, but he’s always kind of winging it.  Tobias gives a sneak peek of a tool that shows up as being named Carbon, but Rohan introduces it as Microsoft SQL Operations Studio.  It works on Windows, Linux, and Mac to connect to a SQL Server database.  So at some point SSMS will be deprecated?  Yeah…just like Profiler 😉

Rohan comes back and talks a bit more about the cloud-first approach.  Azure SQL Database is updated regularly, and on a monthly basis new CUs are being pushed out (CU1 for SQL Server 2017 has ALREADY been released).  Multiple terabytes (yes TERABYTES) of telemetry data are captured every day from Azure implementations.  This feedback goes right into making the product better (how else do you think they’re able to release builds and CUs faster?).

Managed Instances

New deployment option in Azure: Managed Instances.  It’s currently in private preview, but you get an entire SQL Server instance with PaaS benefits.  This allows for much more of a lift and shift migration with minimal code changes.  Microsoft is also working on a database migration service – this will not be easy and may not work for every solution, but it’s a step in the direction of making that process better and more reliable.

Working with Large Data/BI Solutions

The next demo shows performance and scale with Azure SQL Database, hosted by Danielle Dean, a Principal Data Scientist at Microsoft.  They read in a lot of data – ingesting patient vitals into an Azure SQL database (1.4 million rows/sec via columnstore and in-memory).  Azure Machine Learning Workbench is then used to take an existing model and put it into Azure SQL Database.  Switching to SSMS (it’s not dead yet folks!!), you can query that model (it “looks” like a table) and use a stored procedure to predict against the model.
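
Here’s a hedged sketch of what that native scoring pattern looks like in T-SQL – the model is stored as VARBINARY in a table, and PREDICT scores rows directly in the query.  The table, column, and model names below are made up for illustration:

```sql
-- Native scoring sketch: table, column, and model names are illustrative only.
-- The model must be a serialized model that PREDICT supports, and the column
-- name in the WITH clause must match the output the model actually produces.
DECLARE @model VARBINARY(MAX) =
    (SELECT model_object FROM dbo.Models WHERE model_name = N'VitalsRiskModel');

SELECT d.PatientID,
       d.HeartRate,
       d.SystolicBP,
       p.RiskScore
FROM PREDICT(MODEL = @model,
             DATA = dbo.PatientVitals AS d)
WITH (RiskScore FLOAT) AS p;
```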

Scott Currie, the creator of Biml, is on stage to talk about using the new Azure Data Factory with Biml.  I’ll admit, this isn’t a technology I know, so I’m just listening at this point 🙂

Azure SQL Data Warehouse

Azure SQL Data Warehouse is designed from the ground up to separate storage and compute so that you can scale each independently.  This design is very flexible and powerful, provides significant ability to scale (up to 60 nodes currently), and it’s secure.  Also launched in early October: the Azure SQL Data Warehouse Compute-Optimized Tier.  This was a result of feedback from customers who had some DW queries that were running REALLY slow in Azure.  The solution caches column segments (data) locally, and this cache survives failures, which then provides high performance for DW queries.  Julie Strauss, a Principal Group Program Manager, comes on stage to demo this.

Why are these behavioral analytic queries so compute-intensive?  It’s a combination of the data that’s needed and the complexity of the query.  There are two kinds of analysis – funnel and cohort.  Both use telemetry from customer interactions/purchases from web site clicks.  The sophistication of the query comes from taking a vast amount of data (100TB) and folding it many times to create the different cohorts – the query takes about 6 seconds to read through that 100TB of data.  I’d like to know how this is done…

A quick PowerBI demo against data with 100+ million rows.  The model is built from Visual Studio, sourcing data from Azure SQL Data Warehouse – it’s very easy to deploy the model and then generate different visuals in PowerBI (“clicky clicky drop” was the “official” term used…I’m not kidding).  There’s also the ability to scale in Azure very quickly, so you only use the resources you really need (and thus only pay for what you need and use).

Ok, there was one more demo but I’ll admit, I’m fading.  🙂

Rohan is wrapping up the keynote and talks about community and all of us working together and lifting each other up.  Rohan gives a shout out to members of the community that have really given a lot back to others.  He also mentioned Denny Cherry, a member of the community who had surgery a couple weeks ago.  I had a recent update from a colleague that Denny is doing well – please send good thoughts his way!

And that’s a wrap.  Off for a day of learning – have a great one friends!