Creating a new Login on Multiple Servers with the same SID using PowerShell

Steve Jones (Blog|Twitter) asked a question on the #sqlhelp hashtag tonight that I found interesting, so I thought I'd play around with PowerShell for a few minutes and see if I could beat Aaron Nelson (Blog|Twitter) to the punch with a PowerShell answer for once.  Steve's question was:

@way0utwest: “Anyone have a PSH script that can add a login, run sp_helprevlogin, pipe the output as a new batch to a 2nd server? #sqlhelp #powershell”

Initially I thought that would be easy enough, but then I realized that if you are creating a new login, you don’t really need sp_helprevlogin to create the login on the second server; you just need the new login’s SID from the first server.  We already have the new login name and password, so the only thing sp_helprevlogin gives us that we don’t already have in this case is the SID for the new login.  Ostensibly, the intent behind Steve’s request is to have the login created on both servers with the same SID so that a restore of the database from the first server to the second server doesn’t result in orphaned database users.  A good fit for this is database mirroring, where the logins are created on the principal server and the mirror needs an identical login with the same SID for the application to work correctly after a failover occurs.

We can leverage SMO to do this with PowerShell by loading the Microsoft.SqlServer.SMO assembly into our environment.

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

Then we create and set a number of strings for the principal and mirror server names and the new login’s name and password.

[string]$PrincipalServer = ".\2008R2Lab1";
[string]$MirrorServer = ".\2008R2Lab2";
[string]$LoginName = "TestLogin";
[string]$Password = "pass@word1";

Then we create a Server object for each of the SQL instances:

# Create SMO connections to both SQL instances
$PriSvr = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $PrincipalServer
$MirSvr = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $MirrorServer

For development and testing purposes we can check the principal server to determine whether the login already exists and, if it does, drop it before we create it:

# Check if the login exists on the Principal server and drop it if it does
if ($PriSvr.Logins.Contains($LoginName))
{ $PriSvr.Logins[$LoginName].Drop(); }

To create the new login, we create a new Login object, passing the Server object for the principal server and the new login’s name into the object constructor.  Then we set the LoginType for the new login object to be a SQL login, and finally we call Create() on the object, passing in the password for the new login.

# Create a new login for the Principal Server
$NewLogin = New-Object ('Microsoft.SqlServer.Management.Smo.Login') $PriSvr, $LoginName
# Specify that this is a SQL Login
$NewLogin.LoginType = [Microsoft.SqlServer.Management.Smo.LoginType]::SqlLogin;
# Create the login on the Principal Server
$NewLogin.Create($Password);

After creating our login, we need to refresh the Logins collection on the principal server to get the new login back with its SID, and then we can grab the login into a local variable for easier access later.

# Refresh the login collection to get the login back with SID
$PriSvr.Logins.Refresh();
# Get a Login object for the Principal Server login we just created
$PriLogin = $PriSvr.Logins[$LoginName];

As before, we can check whether the new login exists on the mirror server and drop it if it does, for development and testing purposes:

# Check if the login exists on the Mirror server and Drop if it does
if ($MirSvr.Logins.Contains($LoginName))
{ $MirSvr.Logins[$LoginName].Drop(); }

To create the new login on the mirror server, we follow all the same steps as we did on the principal, except that we add one additional step: calling set_Sid() with the SID from the login on the principal server, in this case using the $PriLogin variable we stored the login in for easier access.  Once we set the SID, we can call Create() for the login and pass in the password to create the login with.

# Create a new login for the Mirror Server
$NewLogin = New-Object ('Microsoft.SqlServer.Management.Smo.Login') $MirSvr, $LoginName
# Specify that this is a SQL Login
$NewLogin.LoginType = [Microsoft.SqlServer.Management.Smo.LoginType]::SqlLogin;
# Assign the SID to this login from the Principal Server login
$NewLogin.set_Sid($PriLogin.get_Sid());
# Create the Login on the Mirror Server
$NewLogin.Create($Password);

To validate that this all worked as we expected, we can open SSMS and create a new query window that uses SQLCMD mode, allowing us to connect to both instances and query the sys.server_principals catalog view for the TestLogin login to validate that we have the same SID.

:CONNECT .\2008R2Lab1
SELECT *
FROM sys.server_principals
WHERE name = 'TestLogin';
GO
:CONNECT .\2008R2Lab2
SELECT *
FROM sys.server_principals
WHERE name = 'TestLogin';
GO
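
To double-check without leaving PowerShell, we can compare the SID byte arrays of the two logins directly; here is a quick sketch that reuses the objects created above:

# Refresh the mirror login collection and compare the SID byte arrays
$MirSvr.Logins.Refresh();
$MirLogin = $MirSvr.Logins[$LoginName];
$priSid = [System.BitConverter]::ToString($PriLogin.get_Sid());
$mirSid = [System.BitConverter]::ToString($MirLogin.get_Sid());
if ($priSid -eq $mirSid) { "SIDs match" } else { "SIDs do not match" }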

[Image: query results from both instances showing identical SIDs for TestLogin]

As you can see in the above image, the SIDs are identical, which means that the database users will properly map to their server login in the event of a restore from one instance to another, or a database failover when using database mirroring.
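
Had the SIDs not matched, a restore from the principal would have left the database users orphaned.  As a quick aside, something like the following sketch can list any orphaned users in a restored database on the mirror server; the TestDB database name here is just a placeholder for this example:

# List orphaned users in a restored database on the mirror server
# (TestDB is a placeholder database name for this example)
$ds = $MirSvr.ConnectionContext.ExecuteWithResults("EXEC TestDB.dbo.sp_change_users_login 'Report';");
$ds.Tables[0].Rows | Select-Object UserName, UserSID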

The complete script for this is attached to this blog post.

Create Mirrored Login.ps1 (2.13 kb)

PASS Summit 2011 Sessions

Yesterday PASS announced the PASS Summit 2011 regular sessions and four new half-day sessions.  At the same time they also announced the top sessions from the SQLRally conference in Orlando, FL last month. I was thrilled to have been in the Top 5 for the conference overall. A few weeks ago PASS announced the Pre-conference Sessions and Spotlight Sessions.  This year I’ll be presenting a full-day Pre-conference session on Extended Events, as well as a Spotlight Session on Event Notifications.  This will be the fourth time I have attended PASS Summit, and my third time presenting.

Below is the abstract for the Pre-conference session on Extended Events.

Extended Events Deep Dive [400]
Session Category: Pre-conference Session (7 hours)
Session Track: Enterprise Database Administration and Deployment
Speaker(s): Jonathan Kehayias

At PASS 2010, after the public release of CTP1 of SQL Server Denali, the product team demonstrated a replacement for SQL Profiler built on top of Extended Events. The writing is on the wall for SQL Trace, and the future of diagnostic profiling in SQL Server is Extended Events. In this workshop you will learn Extended Events from the ground up, from the original implementation in SQL Server 2008 to the enhancements made in SQL Server Denali that redefine data collection for troubleshooting in SQL Server.

Extended Events provide more information about the operations of SQL Server than has ever before been available in the product. Unfortunately, there is a significant learning gap between Extended Events and SQL Trace, and the lack of a UI has only made the learning curve that much steeper. This full-day deep dive workshop will explain the basic concepts of Extended Events, spin that information into some basic patterns, and build on those patterns to create complex custom functionality that will prepare you for the future. Learn firsthand from the developer of the SSMS Addin for Extended Events in SQL Server 2008 how to leverage Extended Events in your own environment.

I really look forward to this session based on the feedback I have gotten on the Extended Events sessions I’ve done the last two years at PASS.  One of the primary feedback items has been that the session was rushed and didn’t have enough time to cover the subject matter. Extended Events are such a drastic change in profiling system events from SQL Trace that there is a large learning curve that has to be climbed before it is possible to really begin leveraging the feature for diagnostics, and trying to cover the necessary background information in a 75-minute session is challenging at best.

The Event Notifications session is an updated version of a session I presented at SQLSaturday in Tampa, FL two years ago.  The session abstract is below.

Using Event Notifications in SQL Server 2005/2008 [300]
Session Category: Spotlight Session (90 minutes, Invitation only)
Session Track: Enterprise Database Administration and Deployment
Speaker(s): Jonathan Kehayias

Event Notifications are a powerful but often overlooked and rarely used tool in the Database Administrator’s toolkit. This session will provide an overview of the Service Broker components used by Event Notifications and the differences between Event Notifications and other features of SQL Server like SQL Trace and DDL Triggers. It will look at the events that are available for use with Event Notifications and how to find information about the data returned by those events in the Books Online. The session demos will teach you how to leverage the functionality of Event Notifications to automate responses to events inside of SQL Server, and how to build a monitoring solution for problems like blocking and deadlocks.

This session was very popular the first time I presented it, and includes a number of functional demos that can be applied to any environment to immediately begin automating responses to events inside of SQL Server.

If you are attending PASS Summit 2011 this year, I hope to see you in one of my sessions.

SQL Server and Soft NUMA

Friday of last week, Steve Jones (Blog|Twitter) asked a question on the Twitter #sqlhelp tag about SQL Server and Soft NUMA that prompted me to write the long-awaited follow-up to my first post, Understanding Non-Uniform Memory Access/Architectures (NUMA). The question that Steve asked was:

@way0utwest: “Do you always have one soft NUMA node? Say a one CPU PC or SMP architecture? #sqlhelp”

What really started my deeper look into the different NUMA configurations was figuring out how to configure SQL Server for Soft NUMA so that I would see the benefits mentioned in two of the topics in the Books Online. The Understanding Non-uniform Memory Access topic states (or at least stated at the time I wrote this):

"The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance."

I remembered reading something along these lines after recently looking at a number of client servers that had 24 cores and up to 128GB RAM installed in them but were not hardware NUMA configurations. Having the extra lazy writer threads sounded like a good thing, so I went about trying to figure out how to set up Soft NUMA, which led me to the BOL topic How to: Configure SQL Server to Use Soft-NUMA, which provides the following scenario:

“Instance A, which experiences significant I/O, now has two I/O threads and two lazy writer threads, while instance B, which performs processor-intensive operations, has only one I/O thread and one lazy writer thread.”

Perfect, this is exactly what I was thinking, so it was off to do some testing on one of my VMs, and this is where I hit my first bump. I needed to figure out the appropriate CPUMask values for a 24-core system to subdivide it into four Soft NUMA nodes of 6 cores each, but the Books Online and the available blog posts on the topic don’t make that very easy to accomplish, and they don’t explain the basis of the CPUMask very well either. I was able to figure out with some testing that the CPUMask is just a bitmask built from the mask values of the CPUs that contribute to the Soft NUMA node. Clear as mud, right?

To make this a bit easier, I am going to paste some tables that I created in Excel below that demonstrate what I am talking about, but for an 8-core example and not the 24-core solution I was originally working on. The first thing you have to keep in mind is that CPUs are numbered with a CPUID starting at zero, and each CPU has its own mask value that is two to the power of the CPUID. For the example in the How to: Configure SQL Server to Use Soft-NUMA Books Online topic, this works out logically to the table below:

CPUID   Mask   Soft Node   CPU BitMask   Node Registry CPUMask
0       1      1           3             0x03
1       2
2       4      2           12            0x0C
3       8
4       16     3           240           0xF0
5       32
6       64
7       128

The CPU BitMask column is the sum of the Masks for each of the CPUs in the Soft Node, and the Node Registry CPUMask is the hexadecimal representation of that decimal value. The hexadecimal value can be obtained using the DEC2HEX() function in Excel, or by putting the Windows Calculator into Programmer mode.
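
If you would rather skip Excel and the calculator entirely, the same math is easy to do in PowerShell. Here is a minimal sketch that computes the values for Soft Node 3 (CPUs 4 through 7) from the table above:

# Compute the CPU BitMask and hexadecimal CPUMask for one soft node,
# in this case the four CPUs (CPUID 4-7) that make up Soft Node 3 above
$bitMask = [long]0;
foreach ($cpuId in 4..7) { $bitMask += [long][math]::Pow(2, $cpuId) }
"CPU BitMask: {0}  Node Registry CPUMask: 0x{0:X}" -f $bitMask
# Outputs: CPU BitMask: 240  Node Registry CPUMask: 0xF0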

[Images: the DEC2HEX() conversion in Excel and the Windows Calculator in Programmer mode]

So now that we have covered an easier-to-understand example, here is what the CPUMask values for a 24-core server with four 6-core Soft NUMA nodes would be:

CPUID   Mask      Soft Node   CPU BitMask   Node Registry CPUMask
0       1         1           63            0x3F
1       2
2       4
3       8
4       16
5       32
6       64        2           4032          0xFC0
7       128
8       256
9       512
10      1024
11      2048
12      4096      3           258048        0x3F000
13      8192
14      16384
15      32768
16      65536
17      131072
18      262144    4           16515072      0xFC0000
19      524288
20      1048576
21      2097152
22      4194304
23      8388608
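
Rather than building that table by hand, the CPUMask values can also be generated with a quick loop. Here is a minimal sketch for the 24-core, four-node layout above:

# Generate the CPUMask for four Soft NUMA nodes of six CPUs each
for ($node = 0; $node -lt 4; $node++) {
    $mask = [long]0;
    foreach ($cpuId in ($node * 6)..($node * 6 + 5)) {
        $mask += [long][math]::Pow(2, $cpuId);
    }
    "Node{0}: CPUMask = {1} (0x{1:X})" -f $node, $mask
}
# Outputs 63 (0x3F), 4032 (0xFC0), 258048 (0x3F000), and 16515072 (0xFC0000)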

To test this configuration out, I went to one of my SQL Server 2008 R2 VMs on my laptop and made use of the –Pn startup parameter, which tells SQL Server to create n schedulers during SQLOS startup even if the actual hardware does not exist for those schedulers.  This is an undocumented startup parameter and it should NEVER BE USED on a production server.  Just because SQLOS creates additional schedulers doesn’t mean that you physically have additional processors in the server, and if you want to see how badly this startup parameter can make performance drag, create 24 schedulers on your laptop and then run a couple of concurrent queries in SQL Server.  After adding ;-P24 to the startup parameters and restarting my SQL instance, I now have 24 schedulers in sys.dm_os_schedulers, as shown below:

[Image: sys.dm_os_schedulers showing 24 visible online schedulers]

Now all I have to do is add the registry keys to create my Soft NUMA nodes. At this point I realized that I had overcomplicated things by converting the CPUMask values to hexadecimal. The DWORD registry value accepts either the decimal or the hexadecimal value in regedit, so you can skip converting the CPU BitMasks to hexadecimal if you ever actually configure Soft NUMA on one of your servers.

[Image: regedit showing the NodeConfiguration keys with their CPUMask values]
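
The same keys could also be created from PowerShell instead of regedit. The sketch below assumes the NodeConfiguration path for a default SQL Server 2008 R2 instance, which varies by version and instance name, so treat it as an outline rather than a copy-and-paste script:

# Create the Soft NUMA node keys with their CPUMask values (decimal works fine)
# NOTE: this path assumes a default SQL Server 2008 R2 instance; adjust as needed
$base = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration';
$masks = 63, 4032, 258048, 16515072;
for ($i = 0; $i -lt $masks.Count; $i++) {
    $key = Join-Path $base "Node$i";
    New-Item -Path $key -Force | Out-Null;
    New-ItemProperty -Path $key -Name 'CPUMask' -Value $masks[$i] -PropertyType DWord -Force | Out-Null;
}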

After setting all of the registry keys and restarting my instance, I went back into sys.dm_os_nodes to make sure that my NUMA configuration changes had been made correctly.

[Image: sys.dm_os_nodes showing four Soft NUMA nodes]

Everything looks perfect: I now have four Soft NUMA nodes with 6 schedulers in each node, so let’s go look at our new lazy writer threads in sys.dm_exec_requests:

[Image: sys.dm_exec_requests showing a single LAZY WRITER session]
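
If you want to reproduce this check yourself, here is a minimal sketch that queries sys.dm_exec_requests for lazy writer sessions through SMO; the instance name is a placeholder:

# Query for LAZY WRITER sessions; the instance name is a placeholder
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$svr = New-Object ('Microsoft.SqlServer.Management.Smo.Server') '.\2008R2Lab1'
$ds = $svr.ConnectionContext.ExecuteWithResults("SELECT session_id, command FROM sys.dm_exec_requests WHERE command = 'LAZY WRITER';")
$ds.Tables[0].Rows | Select-Object session_id, command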

This is only a subset of the system sessions that actually exist on the instance, but trust me when I tell you, all of the lazy writer threads are shown above; there is only one.  So what happened here? I have four Soft NUMA nodes, we confirmed that, so why don’t I have any additional lazy writer threads?  I was certain I had done something wrong, so I sent a tweet to Amit Banerjee (Blog|Twitter), a good friend of mine from the MSDN Forums who happens to be on the Product Support team in India. It turns out that the Books Online is wrong about the benefits of using Soft NUMA, and this was pointed out a few years ago by Bob Dorr, another escalation engineer in Product Support, in his blog post How It Works: Soft NUMA, I/O Completion Thread, Lazy Writer Workers and Memory Nodes. Bob makes two key points in that post: first, lazy writers are only assigned to hard NUMA nodes, which map to SQLOS memory nodes; and second, the I/O completion threads that are created per Soft NUMA node have nothing to do with disk I/O, but instead handle connection requests and TDS traffic over the network.

Based on this, I filed a Connect feedback item for the incorrect entries in the Books Online for Soft NUMA, so this incorrect information will eventually be corrected. By the time I realized this, I had dedicated just over four hours to figuring out how to make something work that will never actually do what I expected based on the documentation, but I had gained a much better understanding of how NUMA actually works inside of SQL Server, which is something that had up to this point been a big mystery to me.

This leaves open the question: when would you actually make use of Soft NUMA in SQL Server if it doesn’t provide CPU-to-memory masking like hard NUMA does, or provide the benefit of additional lazy writer threads?  One use case is to provide port affinity to drive specific connections to a specific node, as discussed in the How It Works: SQL Server 2005 NUMA Basics blog post, also by Bob Dorr, but this only applies to serial execution plans and not to parallel execution plans, as detailed by Bob in the NUMA Connection Affinity and Parallel Queries blog post. An example of this kind of Soft NUMA usage is the ETL World Record set by SQL Server back in February 2008, which used Soft NUMA to balance the connection load as part of achieving the world record of 1.18TB of flat file data loaded in 1,794 seconds.