Exception calling “InitializeAsReader” in PowerShell Reading Trace Files

Today I needed to process some results files from Distributed Replay to compare before and after metrics from a synchronized replay that I had a client run for testing a change in hardware.  I’ve done this before using PowerShell to automate things and the script for it is pretty simple:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfoExtended") | Out-Null;
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null;
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null;

$InputFile = "E:\Results\27\ResultDir\ReplayResult.trc";
$TableName = "ReplayResults_2"

$conn = New-Object "Microsoft.SqlServer.Management.Common.SqlConnectionInfo";
$conn.ServerName = ".";
$conn.DatabaseName = "ReplayAnalysis"
$InputTrace = New-Object "Microsoft.SqlServer.Management.Trace.TraceFile";
$InputTrace.InitializeAsReader($InputFile);
$TableWriter = New-Object "Microsoft.SqlServer.Management.Trace.TraceTable";
$TableWriter.InitializeAsWriter($InputTrace, $conn, $TableName);

while($TableWriter.Write())
{}

However, when I tried to do this on a new VM running Windows Server 2012 R2 with SQL Server 2016, using the PowerShell ISE, I kept running into the following exception, which I had never encountered before:

Exception calling "InitializeAsReader" with "1" argument(s): "Failed to initialize object as reader."
At line:14 char:1
+ $InputTrace.InitializeAsReader($InputFile);
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : SqlTraceException

I asked about this on Twitter and Shane O’Neill (@SOZDBA) nudged me in the right direction towards finding the root of the problem by suggesting I look at the error in more detail.  So I wrapped the command in a try/catch block, piping the exception to a select *:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfoExtended") | Out-Null;
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null;
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null;

$InputFile = "E:\Results\27\ResultDir\ReplayResult.trc";
$TableName = "ReplayResults_2"

$conn = New-Object "Microsoft.SqlServer.Management.Common.SqlConnectionInfo";
$conn.ServerName = ".";
$conn.DatabaseName = "ReplayAnalysis"
$InputTrace = New-Object "Microsoft.SqlServer.Management.Trace.TraceFile";
try {
$InputTrace.InitializeAsReader($InputFile);
}
catch
{
$_ | select *
}

Now I had more detail about the exception being raised:

Exception : System.Management.Automation.MethodInvocationException: Exception calling "InitializeAsReader" with "1" argument(s): "Failed to initialize object as reader." --->
Microsoft.SqlServer.Management.Trace.SqlTraceException: Failed to initialize object as reader. ---> System.IO.FileNotFoundException: Could not load file or assembly
'file:///C:\Program Files\Microsoft SQL Server\130\Tools\Binn\pfclnt.dll' or one of its dependencies. The system cannot find the file specified.
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark&
stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark&
stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm,
Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)
at System.Reflection.Assembly.LoadFrom(String assemblyFile)
at Microsoft.SqlServer.Management.Trace.TraceUtils.CreateInstance(String assemblyRelativePath, String objectClass)
at Microsoft.SqlServer.Management.Trace.TraceFile.InitializeAsReader(String fileName)
--- End of inner exception stack trace ---
at Microsoft.SqlServer.Management.Trace.TraceFile.InitializeAsReader(String fileName)
at CallSite.Target(Closure , CallSite , Object , Object )
--- End of inner exception stack trace ---
at System.Management.Automation.ExceptionHandlingOps.CheckActionPreference(FunctionContext funcContext, Exception exception)
at System.Management.Automation.Interpreter.ActionCallInstruction`2.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)

A quick Google search for pfclnt.dll landed me on an old Connect item which pointed out that the SQL Server Profiler supporting DLLs for SMO are 32-bit only, and that to use them in .NET you must compile your assembly for x86.  So the answer was pretty simple: start Windows PowerShell ISE (x86) from the Start menu, paste the exact same script into the ISE window, and it runs as expected.
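Since nothing fails until the InitializeAsReader call, a bitness check at the top of the script makes the requirement obvious up front.  This is just a sketch using the standard [Environment]::Is64BitProcess property:

```powershell
# pfclnt.dll is 32-bit only, so fail fast when running in a 64-bit host
if ([Environment]::Is64BitProcess)
{
    Write-Warning "The Trace SMO objects depend on the 32-bit pfclnt.dll; run this script from Windows PowerShell (x86).";
    return;
}
```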

Implicit Conversions that cause Index Scans

Implicit conversions seem to be a growing trend in the SQL Server performance tuning work that I’ve been engaged in recently, and I’ve blogged in the past about ways to identify when implicit conversions are occurring using the plan cache.

For those of you who don’t know, implicit conversions occur whenever values of two different data types are compared, based on data type precedence.  The precedence establishes the hierarchy of the types, and a lower precedence data type will always be implicitly converted up to the higher precedence type.  These conversions increase CPU usage for the operation, and when the conversion occurs on a table column it can also result in an index scan where an index seek would have been possible without the implicit conversion.
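As a simple illustration (a hypothetical table, not the test table built below): comparing a varchar column to an nvarchar literal pushes the conversion onto the column, because nvarchar has higher precedence than varchar.

```sql
-- Hypothetical example: LastName is varchar(50) with a nonclustered index
-- Seek: the literal matches the column's data type, no conversion needed
SELECT BusinessEntityID FROM dbo.Person WHERE LastName = 'Smith';

-- Scan (under a SQL collation): N'Smith' is nvarchar, which has higher
-- precedence, so the column side is implicitly converted, e.g.
-- CONVERT_IMPLICIT(nvarchar(50), LastName, 0) = N'Smith'
SELECT BusinessEntityID FROM dbo.Person WHERE LastName = N'Smith';
```

Whether the second query actually scans depends on the data types and collation involved, which is exactly what the tests below map out.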

For a long time I’ve wanted to map out the most common data types and whether a column-side implicit conversion results in an index seek or an index scan, and recently I finally got around to doing it.

Setting up the tests

To map out the implicit conversion effects, I created two databases using different collations, one using SQL_Latin1_General_CP1_CI_AS and the other using Latin1_General_CI_AS, and then created the following table in each of the databases.

CREATE TABLE dbo.TestImplicitConverts
(
	RowID int NOT NULL IDENTITY (1, 1),
	BigIntCol bigint NOT NULL,
	BitCol bit NOT NULL,
	CharCol char(10) NOT NULL,
	DateCol date NOT NULL,
	DateTimeCol datetime NOT NULL,
	DateTime2Col datetime2(7) NOT NULL,
	DateTimeOffsetCol datetimeoffset(7) NOT NULL,
	DecimalCol decimal(10, 2) NOT NULL,
	FloatCol float(53) NOT NULL,
	IntCol int NOT NULL,
	MoneyCol money NOT NULL,
	NCharCol nchar(10) NOT NULL,
	NumericCol numeric(10, 2) NOT NULL,
	NVarchrCol nvarchar(50) NOT NULL,
	RealCol real NOT NULL,
	SmallDateTimeCol smalldatetime NOT NULL,
	SmallIntCol smallint NOT NULL,
	SmallMoneyCol smallmoney NOT NULL,
	TimeCol time(7) NOT NULL,
	TinyIntCol tinyint NOT NULL,
	GUIDCol uniqueidentifier NOT NULL,
	VarcharCol varchar(50) NOT NULL,
	CONSTRAINT PK_TestImplicitConverts PRIMARY KEY CLUSTERED (RowID)
);
GO

I then created a nonclustered index on each of the columns in the test table so that a single column query filtering on each column could then generate an execution plan with a single index seek when using the matching data type for the filtering.

-- Create nonclustered indexes on all columns to test implicit conversion effects
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_BigIntCol ON dbo.TestImplicitConverts (BigIntCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_BitCol ON dbo.TestImplicitConverts (BitCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_CharCol ON dbo.TestImplicitConverts (CharCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_DateCol ON dbo.TestImplicitConverts (DateCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_DateTime2Col ON dbo.TestImplicitConverts (DateTime2Col);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_DateTimeCol ON dbo.TestImplicitConverts (DateTimeCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_DateTimeOffsetCol ON dbo.TestImplicitConverts (DateTimeOffsetCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_DecimalCol ON dbo.TestImplicitConverts (DecimalCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_FloatCol ON dbo.TestImplicitConverts (FloatCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_GUIDCol ON dbo.TestImplicitConverts (GUIDCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_NVarcharCol ON dbo.TestImplicitConverts (NVarchrCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_RealCol ON dbo.TestImplicitConverts (RealCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_SmallDateTimeCol ON dbo.TestImplicitConverts (SmallDateTimeCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_SmallIntCol ON dbo.TestImplicitConverts (SmallIntCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_IntCol ON dbo.TestImplicitConverts (IntCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_MoneyCol ON dbo.TestImplicitConverts (MoneyCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_NCharCol ON dbo.TestImplicitConverts (NCharCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_NumericCol ON dbo.TestImplicitConverts (NumericCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_SmallMoneyCol ON dbo.TestImplicitConverts (SmallMoneyCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_TimeCol ON dbo.TestImplicitConverts (TimeCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_TinyIntCol ON dbo.TestImplicitConverts (TinyIntCol);
GO
CREATE NONCLUSTERED INDEX IX_TestImplicitConverts_VarcharCol ON dbo.TestImplicitConverts (VarcharCol);
GO

Finally, each table was loaded with 467,000 rows of random data, and the indexes were rebuilt to remove any fragmentation before testing started.

INSERT INTO dbo.TestImplicitConverts
(	BigIntCol, BitCol, CharCol, DateCol, DateTimeCol, DateTime2Col, DateTimeOffsetCol,
	DecimalCol, FloatCol, IntCol, MoneyCol, NCharCol, NumericCol, NVarchrCol, RealCol,
	SmallDateTimeCol, SmallIntCol, SmallMoneyCol, TimeCol, TinyIntCol, GUIDCol, VarcharCol)
SELECT a.number, a.number%1, CAST(b.name AS CHAR(10)), DATEADD(ms, -1*a.number, GETDATE()), DATEADD(ms, -1*a.number, GETDATE()), DATEADD(ms, -1*a.number, GETDATE()), DATEADD(ms, -1*a.number, GETDATE()),
	a.number, a.number, a.number, a.number, CAST(b.name AS NCHAR(10)), a.number, b.name, a.number,
	DATEADD(ms, -1*a.number, GETDATE()), a.number, a.number, DATEADD(ms, -1*a.number, GETDATE()), a.number%255, NEWID(), b.name
FROM master.dbo.spt_values AS a
CROSS JOIN master.dbo.spt_values AS b
WHERE a.type = N'P'
  AND a.number < 1000
  AND b.name IS NOT NULL;
GO
ALTER INDEX ALL ON TestImplicitConverts REBUILD;
GO

The Test Harness

The test harness first connects to the instance of SQL Server and retrieves the column list for the TestImplicitConverts table, storing it in an ArrayList that is then used to iterate over the tests, generating dynamic statements that use a different data type for each test.  The types are selected from the SqlDbType enumeration in .NET, filtering out the types not being tested as part of this investigation.  Each test is executed with SET STATISTICS XML ON to collect the actual execution plan, which is then processed to determine whether the implicit conversion caused a scan to occur.

#Load the Showplan as a XML document for parsing
$xml = New-Object System.Xml.XmlDocument;
$xml.LoadXml($String);

#Setup the XmlNamespaceManager and add the ShowPlan Namespace to it.
$nsMgr = New-Object 'System.Xml.XmlNamespaceManager' $xml.NameTable;
$nsMgr.AddNamespace("sm", "http://schemas.microsoft.com/sqlserver/2004/07/showplan");

#Search for a Index Scan or Clustered Index Scan operation that is an Implicit Conversion
$HasImplicitConvert = $xml.SelectNodes("//sm:RelOp[@LogicalOp='Index Scan' or @LogicalOp='Clustered Index Scan']/sm:IndexScan/sm:Predicate//sm:Convert[@Implicit='1']/sm:ScalarOperator/sm:Identifier/sm:ColumnReference", $nsMgr).Count -gt 0;
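The snippet above starts from a $String variable holding the showplan text; the statement generation that produces it is part of the attached harness.  Conceptually, for each column and SqlDbType combination it does something like the following (a simplified sketch with assumed database, connection, and variable names, not the exact harness code):

```powershell
# Assumed connection string and test database name
$sqlConn = New-Object System.Data.SqlClient.SqlConnection "Server=.;Database=ImplicitConvertsTest;Integrated Security=SSPI";
$sqlConn.Open();

$ColumnName = "IntCol";                           # one entry from the column ArrayList
$TestType = [System.Data.SqlDbType]::BigInt;      # one value from the SqlDbType enumeration

# Parameterize the comparison value with the data type under test so the
# showplan reveals whether the column side had to be converted
$cmd = $sqlConn.CreateCommand();
$cmd.CommandText = "SET STATISTICS XML ON;
SELECT $ColumnName FROM dbo.TestImplicitConverts WHERE $ColumnName = @value;";
$cmd.Parameters.Add("@value", $TestType) | Out-Null;
$cmd.Parameters["@value"].Value = 1;

$reader = $cmd.ExecuteReader();
$reader.NextResult() | Out-Null;   # skip the query rows; the next result set is the showplan XML
$reader.Read() | Out-Null;
$String = $reader.GetString(0);    # the showplan XML parsed by the snippet above
$reader.Close();
$sqlConn.Close();
```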

Then the results of each test are output to the host as a comma-separated list that can be pasted into Excel and used to generate a chart of the implicit conversion effects.

Note: The full test harness and scripts are attached to this post.

Results

The results of the tests are mapped out in the two charts below.  The green blocks show where an implicit conversion occurs but still results in an index seek operation.  The yellow blocks show where the implicit conversion causes an index scan to occur.  The red blocks show where an operand type clash occurs and implicit conversion is not actually supported by SQL Server for the data types, and the grey blocks are the same data type, so no conversion is required.


Figure 1 – Implicit Conversions using SQL_Latin1_General_CP1_CI_AS

There is a difference between the SQL_Latin1_General_CP1_CI_AS collation and the Latin1_General_CI_AS collation for the char and varchar data types when converting to nchar or nvarchar.  In the SQL collation the conversion results in an index scan as shown above, but in the Windows collation the scan does not occur and an index seek is still used to execute the query.


Figure 2 – Implicit Conversions using Latin1_General_CI_AS

Conclusion

While implicit conversions occur in SQL Server to allow data evaluations against different data types, they can introduce performance problems for specific data type conversions that result in an index scan occurring during the execution.  Good design practices and code reviews can easily prevent implicit conversion issues from ever occurring in your design or workload.

Download scripts here!

The AdventureWorks2008R2 Books Online Random Workload Generator

Over time, I’ve had a number of reasons to need to run a random workload against SQL Server to be able to demonstrate troubleshooting, how SQLOS works, and most recently how to capture a Replay Trace for my series on the Distributed Replay Utility in SQL Server 2012.  For a while I’ve maintained a large workload script that I would run using multiple sqlcmd command line windows to fire off the workload, but one of the problems with this has been that it was incredibly predictable, and it didn’t scale the way I really wanted it to.

When I was working with Distributed Replay, this became somewhat problematic for generating a randomized workload to capture a Replay Trace from, so I took a few hours and went back to the drawing board.  What I came up with was a large script file that contains all of the SELECT statement examples from the SQL Server Books Online (http://msdn.microsoft.com/en-us/library/ms187731.aspx).  This script is divided into separate sections using a delimiter, and I then wrote a PowerShell script that reads the file, breaks it down into individual queries, and randomly executes them against the configured SQL Server using SMO.

The two files required to make use of this are attached to this blog post and can be used with minimal modifications against any SQL Server 2008+ system that has the AdventureWorks2008R2 database attached to it.  To make use of the PowerShell script, you will either have to sign it, or if you work like I do in my VMs, allow unsigned script execution with Set-ExecutionPolicy Unrestricted.

The PowerShell script is incredibly simple, code-wise.  It loads the SMO assembly, splits the file contents on the delimiter, and then, inside an infinite loop, picks a random query and executes it against the SQL Server.

# Load the SMO assembly
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo");

# Set the server to run the workload against
$ServerName = "SQL2012-DB1";

# Split the input on the delimiter
$Queries = Get-Content -Delimiter "----" -Path "AdventureWorks BOL Workload.sql"

WHILE(1 -eq 1)
{
    # Pick a Random Query from the input object
    $Query = Get-Random -InputObject $Queries;

    #Get a server object which corresponds to the default instance
    $srv = New-Object -TypeName Microsoft.SqlServer.Management.SMO.Server $ServerName

    # Use the AdventureWorks2008R2 database
    $srv.ConnectionContext.set_DatabaseName("AdventureWorks2008R2")

    # Execute the query with ExecuteNonQuery
    $srv.ConnectionContext.ExecuteNonQuery($Query);

    # Disconnect from the server
    $srv.ConnectionContext.Disconnect();
   
    # Sleep for 100 milliseconds between loops
    Start-Sleep -Milliseconds 100
}

To generate random workloads, I generally fire up 3-5 copies of this script on a client concurrently and leave it running in the background to generate the load. 
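Starting those copies can itself be scripted rather than opening windows by hand; for example (a sketch, assuming the script above is saved as RandomWorkloadGenerator.ps1):

```powershell
# Launch five concurrent copies of the workload script (assumed file name)
1..5 | ForEach-Object {
    Start-Process powershell.exe -ArgumentList "-ExecutionPolicy Bypass -File .\RandomWorkloadGenerator.ps1";
}
```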

AdventureWorks BOL Workload.zip (6.35 kb)