The post New Course: “SQL Server: Transact-SQL Common Table Expressions” appeared first on Joe Sack.
This is a short, demo-centric course on how to create and use common table expressions (CTEs) correctly, aimed at developers and DBAs working with SQL Server 2005 and later. Areas I cover include:
Much more to come from the SQLskills team!
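For readers who haven't used them yet, a minimal CTE looks like the following sketch (the table and column names are hypothetical, in the style of the AdventureWorks samples):

```sql
-- A CTE names a derived result set up front; the outer query
-- then references it like a table.
WITH RecentOrders AS
(
    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE OrderDate >= '20080101'
)
SELECT COUNT(*) AS RecentOrderCount
FROM RecentOrders;
```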
The post New Article on SQLPerformance.com “Exploring Low Priority Lock Wait Options in SQL Server 2014 CTP1” appeared first on Joe Sack.
Exploring Low Priority Lock Wait Options in SQL Server 2014 CTP1
In this article I build on my last SQLPerformance.com post, further exploring online operation options, and specifically the low priority lock wait improvements included in SQL Server 2014 CTP1.
The post New Course: “SQL Server 2012: Transact-SQL Error Handling” appeared first on Joe Sack.
As the title suggests, this course steps through how to write Transact-SQL code that handles both anticipated and unanticipated errors during code execution. The audience is developers and DBAs – and while the title says SQL Server 2012, several aspects apply from SQL Server 2005 onward.
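As a small taste of the pattern the course covers, here is a minimal TRY...CATCH sketch that works on SQL Server 2005 and later:

```sql
BEGIN TRY
    -- Force a predictable runtime error (division by zero)
    SELECT 1 / 0 AS WillFail;
END TRY
BEGIN CATCH
    -- The ERROR_* functions are only meaningful inside the CATCH block
    SELECT ERROR_NUMBER()   AS ErrorNumber,
           ERROR_SEVERITY() AS ErrorSeverity,
           ERROR_MESSAGE()  AS ErrorMessage;
END CATCH;
```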
The post Validating Instance-Level Index View and MERGE Optimization Activity appeared first on Joe Sack.
I won’t rehash what they have collectively already covered thoroughly – but here is a quick tip for identifying indexed view and MERGE optimization activity via the sys.dm_exec_query_optimizer_info DMV. The following query shows the counter name and the number of occurrences of MERGE statement optimizations and indexed view matches since the SQL Server instance last restarted:
SELECT [counter],
[occurrence]
FROM sys.[dm_exec_query_optimizer_info]
WHERE counter IN
('merge stmt',
'indexed views matched');
I see this as a “first cut” check – but there are some key limitations that make it only a starting data point rather than a definitive answer.
Even with those limitations, if you see non-zero values for these counters, it might accelerate your investigation and the application of the appropriate cumulative update. I prefer to keep up with serious fixes regardless, but if you need to prioritize what gets patched in a large environment with thousands of SQL Server instances, this data may help drive that prioritization.
The post New Course: SQL Server: Transact-SQL Basic Data Modification appeared first on Joe Sack.
My latest course, “SQL Server: Transact-SQL Basic Data Modification”, which is the companion to “SQL Server: Transact-SQL Basic Data Retrieval”, was just published today by Pluralsight.com.
The course description is as follows:
“If you need to modify data in a SQL Server database, then you need to know how to use the INSERT, UPDATE and DELETE statements. This course starts by explaining how to find information about the tables and columns you want to modify. It then explains the INSERT, UPDATE, and DELETE statements in detail along with various methods for limiting the data being modified. Finally, it moves beyond the basic modification statements to more advanced topics like the MERGE statement, error handling and more. More than thirty-five demos help to give you a thorough understanding of how to perform these essential operations, all using a freely-available demo environment that you’re shown how to set up and configure. This course is perfect for developers who need to modify data in SQL Server databases, from complete beginners through to more experienced developers who can use some of the modules as reference material. The information in the course applies to all versions from SQL Server 2005 onwards.”
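To give a quick flavor of the basic modification statements the course walks through, here is a minimal sketch (the table and values are hypothetical):

```sql
-- Hypothetical table for illustration only
CREATE TABLE dbo.Widget
    (WidgetID int PRIMARY KEY,
     WidgetName varchar(50) NOT NULL);

-- Add a row
INSERT dbo.Widget (WidgetID, WidgetName)
VALUES (1, 'Gear');

-- Change it
UPDATE dbo.Widget
SET WidgetName = 'Sprocket'
WHERE WidgetID = 1;

-- Remove it
DELETE dbo.Widget
WHERE WidgetID = 1;
```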
I’ll continue doing other T-SQL courses in the future for various levels, starting with the beginner level content as I’ve been doing. My next Pluralsight course will be on a different subject related to SQL Server performance, but I’ll hold off on describing it for now.
The post New course: “SQL Server: Transact-SQL Basic Data Retrieval” appeared first on Joe Sack.
The general description from Pluralsight of this course is as follows:
“If you need to retrieve data from a SQL Server database then you need to know how to use the SELECT statement. Joe starts this course with the basics of a SELECT statement and its various sub-clauses, and progresses to how to select from multiple data sources in the same statement and a comprehensive section on the functions available to manipulate, aggregate, and convert data during the select operation. More than fifty demos help to give you a thorough understanding of how to perform these essential operations, all using a freely-available demo environment that you’re shown how to set up and configure. This course is perfect for developers who need to query SQL Server databases to retrieve data, from complete beginners through to more experienced developers who can use some of the modules as reference material. The information in the course applies to all versions from SQL Server 2005 onwards.”
As mentioned in the description, this course is for SQL Server 2005 and up – and I also cover applicable new functionality introduced in SQL Server 2012.
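As one example of the SQL Server 2012 additions relevant to data retrieval, OFFSET...FETCH enables server-side paging directly in the ORDER BY clause (the table name below is hypothetical):

```sql
-- Page 2 of results, 10 rows per page (requires SQL Server 2012 or later)
SELECT ProductID, ProductName
FROM dbo.Product
ORDER BY ProductName
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY;
```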
This was a fun course to work on because we normally teach at the 300 to 400 level. This particular course gave me an opportunity to cover the fundamentals, and I’m definitely looking forward to recording more.
The post Hash Partitioning with SQL Server 2012’s SEQUENCE object and CYCLE argument appeared first on Joe Sack.
I wondered if SQL Server 2012’s new SEQUENCE object with the CYCLE argument could be used in the service of implementing hash partitioning (of sorts) – allowing me to evenly distribute rows across a set number of partitions based on a hash key. In this scenario I want the distribution to be evenly spread out, but NOT partitioned based on other business keys (like a datetime column or another attribute that has business or application meaning).
So will a column with a sequence default also work as a partition key?
I started off by creating a new table based on AdventureWorksDWDenali’s FactInternetSales table:
-- Create demonstration Fact table: no constraints, indexes, or keys
-- Tested on version 11.0.1750 (SQL Server 2012 RC0)
USE [SequenceDemo];
GO
CREATE TABLE [dbo].[FactInternetSales](
[ProductKey] [int] NOT NULL,
[OrderDateKey] [int] NOT NULL,
[DueDateKey] [int] NOT NULL,
[ShipDateKey] [int] NOT NULL,
[CustomerKey] [int] NOT NULL,
[PromotionKey] [int] NOT NULL,
[CurrencyKey] [int] NOT NULL,
[SalesTerritoryKey] [int] NOT NULL,
[SalesOrderNumber] [nvarchar](20) NOT NULL,
[SalesOrderLineNumber] [tinyint] NOT NULL,
[RevisionNumber] [tinyint] NOT NULL,
[OrderQuantity] [smallint] NOT NULL,
[UnitPrice] [money] NOT NULL,
[ExtendedAmount] [money] NOT NULL,
[UnitPriceDiscountPct] [float] NOT NULL,
[DiscountAmount] [float] NOT NULL,
[ProductStandardCost] [money] NOT NULL,
[TotalProductCost] [money] NOT NULL,
[SalesAmount] [money] NOT NULL,
[TaxAmt] [money] NOT NULL,
[Freight] [money] NOT NULL,
[CarrierTrackingNumber] [nvarchar](25) NULL,
[CustomerPONumber] [nvarchar](25) NULL,
[OrderDate] datetime NULL,
[DueDate] datetime NULL,
[ShipDate] datetime NULL
) ON [PRIMARY];
GO
Next I created the sequence object (increment by 1, with a minimum of 1, a maximum of 10, a cache size of 10, and cycling of values):
CREATE SEQUENCE dbo.Seq_FactInternetSales
AS int
START WITH 1
INCREMENT BY 1
MINVALUE 1
MAXVALUE 10
CYCLE
CACHE 10;
After that, I added a new column to the Fact table called PartitionBucketKey and associated it with the new sequence object:
ALTER TABLE [dbo].[FactInternetSales]
ADD PartitionBucketKey int DEFAULT
(NEXT VALUE FOR dbo.Seq_FactInternetSales);
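As an aside, the cycling behavior itself is easy to observe with a small throwaway sequence (a hypothetical name, so the demo sequence above isn’t disturbed):

```sql
-- Throwaway sequence that cycles 1, 2, 3, 1, ...
CREATE SEQUENCE dbo.Seq_CycleDemo
    AS int
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 3
    CYCLE;

SELECT NEXT VALUE FOR dbo.Seq_CycleDemo; -- 1
SELECT NEXT VALUE FOR dbo.Seq_CycleDemo; -- 2
SELECT NEXT VALUE FOR dbo.Seq_CycleDemo; -- 3
SELECT NEXT VALUE FOR dbo.Seq_CycleDemo; -- 1 (cycled back to MINVALUE)

DROP SEQUENCE dbo.Seq_CycleDemo;
```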
Next, I created a partition function and scheme:
-- Create a new partition function
CREATE PARTITION FUNCTION pfFactInternetSales(int)
AS RANGE LEFT FOR VALUES (1,2,3,4,5,6,7,8,9);
-- Create a new partition scheme
-- And yes, being lazy about the filegroups, as I just want to see whether the
-- individual partitions fan out the way I want...
CREATE PARTITION SCHEME psFactInternetSales
AS PARTITION pfFactInternetSales
ALL TO ( [PRIMARY] );
Next up, I created a clustered index on the table referencing the PK columns used in the original version of this table, placed on the partition scheme via the PartitionBucketKey:
-- Create it on the new column referencing the sequence
CREATE CLUSTERED INDEX IX_FactInternetSales
ON dbo.FactInternetSales(SalesOrderNumber, SalesOrderLineNumber)
ON psFactInternetSales(PartitionBucketKey);
It’s show time. Now I went ahead and populated the table with 60,398 rows from the original table. Not much data for this test, I realize, but this was just an initial proof of concept:
INSERT dbo.FactInternetSales
(ProductKey, OrderDateKey, DueDateKey, ShipDateKey, CustomerKey,
PromotionKey, CurrencyKey, SalesTerritoryKey, SalesOrderNumber,
SalesOrderLineNumber, RevisionNumber, OrderQuantity, UnitPrice,
ExtendedAmount, UnitPriceDiscountPct, DiscountAmount, ProductStandardCost,
TotalProductCost, SalesAmount, TaxAmt, Freight, CarrierTrackingNumber,
CustomerPONumber, OrderDate, DueDate, ShipDate)
SELECT ProductKey, OrderDateKey, DueDateKey, ShipDateKey, CustomerKey,
PromotionKey, CurrencyKey, SalesTerritoryKey, SalesOrderNumber,
SalesOrderLineNumber, RevisionNumber, OrderQuantity, UnitPrice,
ExtendedAmount, UnitPriceDiscountPct, DiscountAmount, ProductStandardCost,
TotalProductCost, SalesAmount, TaxAmt, Freight, CarrierTrackingNumber,
CustomerPONumber, OrderDate, DueDate, ShipDate
FROM [AdventureWorksDWDenali].[dbo].[FactInternetSales];
Now I’ll check if the 60,398 rows were divided up evenly over the 10 partitions:
SELECT partition_number, row_count
FROM sys.dm_db_partition_stats
WHERE [object_id] = OBJECT_ID('[dbo].[FactInternetSales]');
It worked. And if you look at the individual rows, you’ll see that the cycling sequence values were assigned following the PK composite key (SalesOrderNumber, SalesOrderLineNumber):
SELECT SalesOrderNumber, SalesOrderLineNumber, PartitionBucketKey
FROM [dbo].[FactInternetSales]
ORDER BY SalesOrderNumber, SalesOrderLineNumber;
Okay, so it works. But is this a wise thing to do?
I don’t know yet. I have other questions about this technique and I’d like to do more testing on various scenarios. But I do like the fact that I’m able to leverage a native engine feature in service of another native engine feature. Time will tell if this is a viable pattern or a known anti-pattern.
The post Contained DBs and collation conflict appeared first on Joe Sack.
First of all, this demonstration is on SQL Server 11.0.1750 (SQL Server 2012 RC0). I’ll start by executing the following to determine the default collation of the instance:
SELECT SERVERPROPERTY('Collation');
This returns SQL_Latin1_General_CP1_CI_AS.
Next I’ll create a database that does NOT allow containment, so you can see the pre-2012 behavior:
CREATE DATABASE [PCDBExample_No_CDB]
CONTAINMENT = NONE
COLLATE French_CS_AI
GO
Notice that in addition to designating CONTAINMENT = NONE, I used a collation that was different from the SQL Server instance default.
Next, I’m going to create two tables in the newly created database – one regular table and one temporary table – and then insert identical rows into each:
USE [PCDBExample_No_CDB]
GO
CREATE TABLE [DemoCollation]
(DemoCollationNM varchar(100))
GO
CREATE TABLE #DemoCollation
(DemoCollationNM varchar(100))
INSERT dbo.DemoCollation
(DemoCollationNM)
VALUES ('Test Join')
INSERT #DemoCollation
(DemoCollationNM)
VALUES ('Test Join')
Now I’ll execute a query that joins the two tables based on the column name:
SELECT p.DemoCollationNM
FROM dbo.DemoCollation p
INNER JOIN #DemoCollation d ON
p.DemoCollationNM = d.DemoCollationNM
This returns the following error message:
Msg 468, Level 16, State 9, Line 4
Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "French_CS_AI" in the equal to operation.
Next I’ll look at sp_help for each table (regular and temporary):
EXEC sp_help [DemoCollation]
USE tempdb
EXEC sp_help #DemoCollation
As the error suggested, the DemoCollation table’s DemoCollationNM varchar column has a collation of French_CS_AI, while the DemoCollationNM column in the #DemoCollation table has the instance default, SQL_Latin1_General_CP1_CI_AS.
Now let’s see what happens for a contained database. In order to demonstrate a partially contained database, I’ll execute sp_configure as follows:
EXEC sp_configure 'contained database authentication', 1
RECONFIGURE
The following code creates a partially contained database and sets up the same test (different database):
CREATE DATABASE [PCDBExample_CDB]
CONTAINMENT = PARTIAL
COLLATE French_CS_AI
GO
USE [PCDBExample_CDB]
GO
CREATE TABLE [DemoCollation]
(DemoCollationNM varchar(100))
GO
CREATE TABLE #DemoCollation2
(DemoCollationNM varchar(100))
INSERT dbo.DemoCollation
(DemoCollationNM)
VALUES ('Test Join')
INSERT #DemoCollation2
(DemoCollationNM)
VALUES ('Test Join')
Now I’ll test the join:
SELECT p.DemoCollationNM
FROM dbo.DemoCollation p
INNER JOIN #DemoCollation2 d ON
p.DemoCollationNM = d.DemoCollationNM
This time I get the expected result row instead of a collation error.
If I execute sp_help for #DemoCollation2 in tempdb, I also see that its column collation is French_CS_AI. So the containment setting changed the temporary table’s default collation to the user database’s default instead of the SQL Server instance-level default.
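One way to confirm this directly, instead of eyeballing the sp_help output, is to query tempdb’s catalog views for the temporary table’s column collation:

```sql
-- Check the collation actually assigned to the temp table's column
SELECT c.[name] AS column_name,
       c.collation_name
FROM tempdb.sys.columns AS c
WHERE c.[object_id] = OBJECT_ID('tempdb..#DemoCollation2');
```

In the partially contained database scenario above, this should report French_CS_AI for DemoCollationNM, matching the sp_help result.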