Costing and Statistics, continued…

I started off tonight playing with the new page compression feature.  So far I like it.  I haven’t found anything yet about which I want to post (which is code for “I’m still looking for the seams ;)”), but I do have some other things you can try that will teach you a bit about how the SQL Server QP makes assumptions about various kinds of predicates during cardinality estimation.

So, you may or may not know much about how the SQL Server QP figures out what plans to run.  For the uninitiated, it almost seems like some form of magic.  In some ways, it is – it’s very powerful and poorly understood by many, and it usually requires very little effort by someone skilled in the area to make something amazing happen.  SQL Server merely needs to make itself sparkle when I fix a query plan and I’m set for life :).

In reality, SQL Server uses a cost-based optimizer, which means that it keeps track of all sorts of interesting statistical information, row counts, page counts, etc.  It uses all of these in formulas to come up with numbers for each plan fragment and then it weighs the relative costs of all of these to pick a plan that has the “least cost”.  That sounds nice and absolute until you get to go actually try to make that work, and then you are left with all sorts of nasty questions like:

* What should the cost formulas be?
* Do the numbers need to differ based on the customer’s hardware? How do we calibrate all of this stuff?  What do we do as machines get faster?
* How do I estimate how many rows will come back from a predicate in my WHERE clause or from a join, in less time than it would take to just run the query?
* And the same question when I have a bunch of predicates?

Eventually, the QP has to make a set of assumptions so that it can come up with a plan in a reasonable amount of time, both because customers don’t like things to ever take time and because managers don’t like customers telling them how much time something should take.  One assumption might be that data is uniformly distributed over a data type’s possible values when you don’t have any better information.  This can help make it possible to come up with solutions that work well most of the time.  The problem is that estimates can be wrong, and that can cause the QP to pick a different plan than it would have picked with correct information.

So, I’ll show you an example here.  To be clear, I’m not saying that this is something that is “broken”.  This just exposes a place where 2 different assumptions rub up against each other in a way that will SEEM odd to the outside observer.  When you consider the average customer use cases, these assumptions are not bad and work very well the vast majority of the time…

To the example:

drop table comp1
create table comp1(col1 int identity, col3 nvarchar(3000))

declare @i int 
set @i=0
while @i < 70000
begin
insert into comp1 (col3) values (N'123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890')
set @i=@i+1
end

So I create a table with the same long string in it 70,000 times. 

Then I run a query with a where clause just to get some statistics created:

select  * from comp1 where col3 like '%4567890%'
dbcc show_statistics ('comp1', col3)

Once we have all of this stuff, we can look at the estimates for two very similar queries:

select  * from comp1 where col3 like '123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890%'

select  * from comp1 where col3 like '%123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890%'

(One is LIKE ‘abc%’.  The other is LIKE ‘%abc%’, where abc is the value we have inserted 70,000 times.)

So, both queries will return 70,000 rows. 

Well, the abc% pattern query estimates 70,000 rows (good!).  The second query estimates 69.6985 rows.  That’s a bit under ;).
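If you want to check the estimates yourself, one simple option (the graphical estimated plan works too) is the profile output, where the EstimateRows column holds the optimizer’s guess and the Rows column holds the actual count.  For example, wrapped around the shorter pattern from earlier (the same thing works for the two long queries above):

set statistics profile on
go
select * from comp1 where col3 like '%4567890%'
go
set statistics profile off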

Let’s talk about this a bit more so you can understand why.  In the first query, there is an exact string match against a column represented in the statistics histogram.  So, the likely outcome is to take that cardinality count to determine the number of rows that will likely be returned from the query.  In this case, we expect all rows to come back.

In the second one, there is no mechanism to estimate the cardinality of ANY string of this size (SQL Server does have a feature that handles smaller strings, exposed as the “String Index” field in the DBCC SHOW_STATISTICS output, but you can’t see the details of this object in 2005 and I haven’t seen that change in 2008 either).  So, for really large strings, it is left with… guessing.

So, that 69.6985 number is an estimate that is partially based on the length of the string.  Now, the QP could try to walk through each statistics object, look for substrings against any existing piece of statistical data, and then adjust its estimate.  In practice, though, doing that is expensive.  The various statistics objects are built at different times and with different sample rates, so even then they would vary somewhat.  Finally, for most cases it may just not impact the plan choice that much.  Odds are, though, that this will bite at least one of my readers at some point.  So, this is good to know – it can help you find that spot where an assumption in the QP is causing your query plan to be wrong.  This is the sort of case where you will want to consider a query hint to help the QP out.

There are more assumptions (and seams between them) in the cardinality estimation code.  I’ll let you go hunt for them a bit.

Happy Querying!
Conor Cunningham

Statistics, Damned Lies, and Statistics – What is Statman?

One of the areas I managed in SQL Server had to do with the code that automatically builds statistics and uses them during query optimization to come up with a good plan.  Today I’m going to talk a bit about how statistics are built and how this works with parallelism and partitioning.

First things first.  There are two ways in which statistics are created in SQL Server:
1. You create an index.
2. You run a query that needs statistics when they do not exist and the server has “auto-create statistics” enabled.  (That’s the simple definition – there are actually caveats I am skipping)

Why does creating an index also create statistics?  Well, the main cost in creating statistics is reading all the pages into memory.  The thought is that if you want to create an index, you have already paid the expensive cost and the server might as well go ahead and create the statistics object for you instead of re-reading those pages later.  So, you get these “for free” (well, almost free).

When you run a query that tries to perform an operation where cardinality estimates would be improved by having statistical information about a column, the server can try to create statistics during the compilation process.  Since compilation time needs to be kept to a minimum, these are usually done over a _sample_ of the pages to avoid making the compilation of a simple query as expensive as an index build.
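You can see the same trade-off if you build statistics by hand.  Here’s a small sketch on a hypothetical table t with a column col1 (names made up for illustration):

-- sampled: cheap to build, which is what auto-create statistics is aiming for
create statistics st_col1_sampled on t(col1) with sample 10 percent

-- fullscan: reads everything, like the statistics you get as a side effect of an index build
create statistics st_col1_full on t(col1) with fullscan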

So the basic plan to create an index is a single-threaded plan that looks something like this:

INSERT (New B-Tree)
  |
Sort
  |
SCAN(Heap or some index)

So during this plan’s execution, there is an implicit side-effect to also create this statistics object.

For auto-stats, there is a separate query that is run.  The syntax is not public, but you can see artifacts of this if you look at the profiler and you’ve turned on the various auto-stats and plan outputs:

drop table stat1
create table stat1(col1 int, col2 int, col3 binary(6000))
declare @i int
set @i=0
while @i < 1000
begin
insert into stat1(col1, col2) values (rand()*1000, rand()*1000)
set @i=@i+1
end


select * from stat1 where col2 > 5

Execution Tree
————–
Stream Aggregate(DEFINE:([Expr1004]=STATMAN([t1].[dbo].[stat1].[col2])))
  |–Sort(ORDER BY:([t1].[dbo].[stat1].[col2] ASC))
       |–Table Scan(OBJECT:([t1].[dbo].[stat1]))

What is this?  Well, this is the plan that is used to generate the statistics object.  It scans a table, sorts it (into ascending order), and then feeds it into this magical thing called statman.

Now, the details of statman are undocumented, but you can infer that it is a special internal aggregate function that is being run inside of a group by operation (stream aggregate in the plan).  This means that it is collapsing all the rows into some BLOB and then it does something with this.
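If you want to poke at the object this plan created, something like the following shows the auto-created statistics on the table.  The name is system-generated (usually of the form _WA_Sys_…), so substitute whatever you see in the output:

select name, auto_created, stats_id
from sys.stats
where object_id = object_id('stat1')

-- then look at the histogram it built (substitute the generated name from above):
-- dbcc show_statistics ('stat1', <generated statistics name>)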

Next time I hope to talk about parallel statistics build.

Happy querying!

Conor Cunningham

Slow Outlook 2007 on Vista (x64)

I’ve returned from a small trip and I will be preparing my next SQL post soon.

I’ve been struggling with slow POP3 sync behavior on my Outlook 2007/Vista 64 box, and I finally found a hammer to beat it back into submission.

The problem – I sync manually, and when I do, the application becomes so slow that it is basically non-responsive.  I deleted a bunch of mail to get my mailbox down to a reasonable size (150MB) – still there.  I deleted RSS feeds, blaming XML for my woes again :).  That didn’t work either.

I eventually found this:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;935400

Something to do with the new Vista TCP window size algorithm not working well with “legacy” hardware.  Not only is the download slow, but using Outlook becomes unbearable as well, so I can’t really do anything with the application…  I didn’t get timeout errors (at least not frequently), but I certainly had bad feelings about my email experience.

When I disabled the new TCP window behavior, all went back to what was expected… Now I have to go find all those RSS feeds again :).  So here’s to “netsh interface tcp set global autotuninglevel=disabled”.  It worked for me, at least so far.

I think that this is a case where Microsoft has an opportunity to compare its offering to gmail and wonder “hey, why do people think that a webui is better – they are _so_ much slower…”.  Well, in some cases the thick client is actually the slower one, and that doesn’t make MS look very good.  So I hope this workaround avoids frustration for others :).

I’ll also hope that MS adds something to Outlook in the next service pack for when it realizes that it has downloaded 15KB in 2 minutes from my POP3 server.  Perhaps they can add a popup or a special error to point people in the right direction.

My setup:
Vista x64 SP1
netgear gigabit 8 port switch
linksys wrt54g NAT
some no-name cable modem that my cable company gave to me.

How to write non-JOIN WHERE clauses

Based on my previous post describing the differences between the ANSI-89 and ANSI-92+ join syntaxes and my recommendations, I had a follow-up question in a comment, which was (paraphrased):

What do I do with non-join WHERE clauses – how should I write those?

Example:

SELECT p.FirstName + ' works in ' + d.DepartmentName AS 'Who Works Where'
FROM Person AS p
JOIN Department AS d
ON p.DepartmentID = d.DepartmentID
  AND p.JobType = 'Salaried'

(so, the question is about the filter condition on p.JobType).

Answer: it doesn’t generally matter, but I’d recommend that you put it in the WHERE clause for readability.
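In other words, I’d write the example above like this:

SELECT p.FirstName + ' works in ' + d.DepartmentName AS 'Who Works Where'
FROM Person AS p
JOIN Department AS d
ON p.DepartmentID = d.DepartmentID
WHERE p.JobType = 'Salaried'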

I’ll explain the why now.  A little background first:

In SQL Server (actually in all QPs of any note), the SQL string is converted into an operator tree, and then there’s a lot of magic to rewrite that query tree into alternative forms.  While some people think of this as “if I try different syntaxes, the query may execute faster”, the optimizer is actually doing something at a much deeper level – it is essentially rewriting the tree using a series of rules about equivalences.  It’s a lot more like doing a math proof than trying different syntaxes – it deals with associativity, commutativity, etc.  (Conor bats cobwebs out of people’s heads – yes, you guys did this stuff in school.)  It’s a big set theory algebra machine.  So, you put in a tree representing the syntax at one end and get a query plan out the other side.

So, I’ll ask the question from a slightly different perspective and answer that too to help explain the “why”:

“Does putting filters in the join condition of an inner join impact the plan choice from the optimizer?”.

Answer: no – at least not in most cases. 

When the SQL Server Optimizer starts working on this query, it will do a number of things very early in the optimization process.  One of them is called “simplification”, and most of you can guess what happens there.  One core task in simplification is “predicate pushdown”, where the query is rewritten to push the filter conditions in WHERE clauses towards the tables on which the predicates are defined.  This mostly enables index matching later in optimization.  It also enables computed column matching.

So, these predicates are pushed down in both cases.  You lose a lot in query readability by trying this form of rewrite for very little gain.

There is one case where I’d consider doing this, but it really requires that you have uber knowledge of the QP.  However, this seems like a good challenge, so I’ll explain the situation and let you guys write in if you can find an example of it:

You know that the QP uses relational algebra equivalence rules to rewrite a query tree (so A join B is equivalent to B join A, filter.a(a join b) == (select * from a where filter.a) join b, etc.).

One could imagine that some of the fancier operators may not fit as easily into the relational algebra rewrite rules.  (Or, they are just so complex that the cost of trying such rewrites outweighs the benefit).

Can you find operators where filter(OPERATOR(SCAN TABLE)) is not equivalent to OPERATOR(filter(SCAN TABLE))?  (There is a sketch of one case after the list below.)

Obviously inner join is a bad place to start.  I’ll throw out some not-so-random areas for you to try:
* updates
* xml column manipulations
* SELECT list items on objects that change semantics based on how many times they are executed in a query (rand()?  think functions)
* play with group by (this one is tricky)
* OVER clause?
* UNION/UNION ALL, INTERSECT, …
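To get you started, here is a sketch of the OVER clause case on a made-up table t(col1 int, col2 int).  The two forms below are not equivalent, because the filter changes the set of rows the window function numbers:

-- filter(OPERATOR(scan)): row_number is computed over ALL rows, then rows are filtered
select * from
  (select col1, col2, row_number() over (order by col2) as rn
   from t) as x
where x.col1 = 1

-- OPERATOR(filter(scan)): row_number is computed only over the rows where col1 = 1
select col1, col2, row_number() over (order by col2) as rn
from t
where col1 = 1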

So, there are some cases where the QP will do these rewrites, and there are some places where it can’t or won’t (or at least doesn’t always).  In a few of these cases, the intent of the query can be preserved by manually rewriting the query to push the predicate “down” the query tree towards the source tables.  However, I would not recommend this unless you really know what you are doing – the query rewrite needs to be equivalent or else you may not get the right results back from your query!

Bottom line – I think that the query is far more readable with non-join predicates in the WHERE clause.  Whenever I try to optimize queries, I usually push them into this format so that I can wrap my head around what the query is trying to accomplish.

Happy querying!

Conor

ON vs. WHERE – where should you put join conditions?

I had a request from a reader that I’ll answer today about when to do joins in the ON clause and when to do them in the WHERE clause.  For example:

SELECT * FROM A, B WHERE A.a = B.b

vs.

SELECT * FROM A INNER JOIN B ON (A.a = B.b)

The short answer is that both are the same (at least for inner joins), but I prefer and encourage you to use the latter format (and I will explain why).

Earlier versions of ANSI SQL did not contain the ON clause for join conditions – it used the where clause for everything.  This was fine for inner joins, but as database applications started using outer joins, there were problems that arose with this approach.  Some of you may remember the original ANSI-89-era syntax for *= and =*.  These were used on the predicates to define the behavior for an outer join.  In this case, we’ll preserve non-matching rows from A in addition to the normal rows returned from a join:

SELECT * FROM A, B where A.a *= B.b
which is equivalent to:
SELECT * FROM A LEFT OUTER JOIN B on (A.a = B.b)

This “hack” worked fine until people started using multiple predicates and doing multiple outer joins in one query.  Then we were left with a big, ambiguous mess about which *= or =* applied to which join.  So, ANSI banished *= and =*, and SQL Server has been threatening to follow for quite some time.  I honestly never use the old-style join syntax, so I don’t even recall the exact deprecation state.  It is dead to me already ;).

The broader concept is that predicates are “attached” to joins using the ON clause.  This is very helpful when you are trying to figure out what should happen in a query.  It helps semantically define the set of rows that should return from the join.

So, if I start nesting various inner and outer joins in a big, nasty query, all of a sudden it is very nice to have an ON clause to define what should go where.

SELECT * FROM A INNER JOIN (SELECT B.* FROM B LEFT OUTER JOIN C ON (B.col1=C.col1 and B.foo=C.bar)) AS I1 ON A.col1=I1.col1;
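Once outer joins are involved, the attachment matters for correctness, not just readability: a predicate in the ON clause and the same predicate in the WHERE clause mean different things.  A quick sketch using the A and B tables from above:

-- in the ON clause: B rows failing B.b > 10 simply don’t match, so every A row
-- is still preserved (NULL-extended where there is no match)
SELECT * FROM A LEFT OUTER JOIN B ON (A.a = B.b AND B.b > 10)

-- in the WHERE clause: the filter runs after the join, so the NULL-extended
-- rows are thrown away too (B.b > 10 is never true when B.b is NULL)
SELECT * FROM A LEFT OUTER JOIN B ON (A.a = B.b) WHERE B.b > 10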

As applications get more complex, it is not uncommon to have 10s of tables in a query.

Internal to the SQL Server query processor (actually pretty much all query processors), there is a tree format for the query operations.  The exact representation will vary from vendor to vendor, but each one of these SQL syntax pieces translates to some set of relational operators in this query tree.  Putting your query syntactically into this format gets things much closer to the internal algebra of the query processor, in addition to making things easier to read as queries get more complex.

Actually, if I were to go build my own QP, I’d seriously consider adding a query tree mechanism in addition to SQL (this concept is not new and is not mine).  OLEDB had a concept like this in the earlier public betas, for example.  Obviously the implementor would want to retain the ability to change the internal implementation, but a tree of commands is actually far easier to grok than SQL, once you get used to the idea.  Other technologies expose a graph structure to you (video codecs/transforms in windows, msbuild is an XML file representing a tree, etc).  SQL as a textual language exists historically.  It’s also a nice way to write queries :).

The only other area where I get concerned is when people turn off ANSI_NULLs.  It is one of those historical features that should basically never be used.  I could imagine cases where some comparisons in joins behave differently in the ON clause vs. afterwards in a WHERE clause.  I don’t want to pollute people’s minds, as my attempts to go back and re-learn the quirks for this post left me baffled, since NULL=NULL returns TRUE only for some syntax constructs.  So, I don’t have a case where it is broken, but I’ll leave you with the “ANSI_NULLs off is bad” message and list it as a potential reason.

Will you get wrong results if you use the old-style join syntax?  No.  The world will still turn.  So, this is really a recommendation based on style and sanity.  I would recommend that you get used to the newer style – it will help you think more like the QP, and for some applications that might let you write more powerful features for your users.

Thanks,

Conor Cunningham

Local-Global Aggregation

I don’t know about you, but group by is one of my favorite operators.  There are a TON of interesting optimizations that a QP considers when you start adding group by into queries, especially when you have joins and then a group by.  TPC-H benchmark wars among the large database vendors are won and lost on many such optimizations.

So, if you are doing relational OLAP (ROLAP) or are otherwise running group by queries over lots of well-normalized data, then I suggest you brush up a little on your knowledge of group by – it will help you understand when queries are behaving and when they are not.

Here’s the paper I tell people to read on the subject.  It’s written for the person implementing a database, but anyone who can read query plans should get the basics from the paper.  It is the basis for most of the hard-core optimizations (and tricky problems) that face all query processors today. 

The basic idea comes from the observation that an aggregate function can be split into multiple operations and done in parts.  Some parts can be done earlier in a query, saving a lot of work.  If these partial results can be combined later, you might be able to speed up a query by computing them early and then combining them at the end of the query.  The usual savings is that you don’t have to materialize the results of a join when you only care about the aggregate over some piece of it.

Most of the core aggregate functions defined in SQL can be decomposed.  If you have a set of rows {x} := concat({y}, {z}), then:
SUM({x}) == SUM({y}) + SUM({z})
COUNT({x}) == COUNT({y}) + COUNT({z})
MAX({x}) == MAX(MAX({y}), MAX({z}))

Not all aggregate operations can be decomposed in this manner, but many of them can. 
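AVG is the classic example: you cannot just combine two AVGs, but you can still split the work if each part carries its SUM and COUNT (glossing over NULL handling):

AVG({x}) == (SUM({y}) + SUM({z})) / (COUNT({y}) + COUNT({z}))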

If you take this concept and then apply it to a query with some joins:

select sum(col1) from a join b join c

Then the idea of local-global aggregation is that you can do part of the sum before joins and pass up the SUM for each group instead of all the rows from that group. 
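As a rough sketch of what that rewrite looks like, using a made-up join column x (this is the kind of transformation the optimizer considers internally; you don’t normally write it yourself):

-- original form (the query above, with a join predicate filled in and one join dropped for brevity)
select sum(a.col1)
from a join b on a.x = b.x

-- with a local (partial) aggregate pushed below the join; the global SUM at the
-- top combines the per-group partial sums
select sum(pa.s)
from (select x, sum(col1) as s
      from a
      group by x) as pa
join b on pa.x = b.x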

This idea becomes more powerful when you start throwing more complex operations into a query processor, such as partitioning or parallelism.  Often, you want to perform partial aggregations on these groups to minimize the amount of data you have to send between threads, or perhaps between nodes on a NUMA machine.  All of this is the fun stuff that makes databases interesting.

Not every aggregate can be pushed below every join – there are rules about what can and can’t be done while still returning the same results from a query.  For example, you may need to consider whether the aggregate function can handle seeing additional NULL values and still return the same result.

If you go look at the SQL CLR user-defined aggregate definition, you’ll see the exposed pieces of some of this capability in the SQL Server query optimizer.  I won’t spoil all of your fun, but go take a look. 

Happy Querying!

Conor

Hunting for SQL 2008 SPARSE Column Easter Eggs

So I was playing today with the sparse column implementation in the latest SQL 2008 CTP, and I was specifically looking to see how this thing would work with the query processor.  I wanted to know how much information is exposed for the QP to make smart decisions when this functionality is used in complex queries with cost-based plan choices.  If the QP doesn’t have the information, then sometimes the query plans will be sub-optimal because, well, garbage-in garbage-out.  While the SQL Server QP does a tremendous job at making complex plan choices compared to some of the other commercial alternatives, there are still limits on what the Optimizer can model in a reasonable amount of time.  As such, there are seams where the product tends to not work as well as one would hope.  This will always be true.  While I suppose that will also keep me employable, it is useful to understand those limits because it will help you know where to look or, if it’s really hard, when to ask for help.

The SQL Server QP knows a couple of things about the data stored in a table in the storage engine:
1. How many physical pages it uses
2. How many rows it has in it (approximately)
3. Single-column statistics over a sample of the data
4. A basic notion of column interdependence to help in estimating queries with multiple scalar predicates.

From 1 and 2 it can derive the average row width.  That’s useful for determining things like “how big will my sort be” if the query needs to sort.  That’s a good thing – it leads to reasonable estimates for many choices in the QP.

So let’s add sparse columns into the mix.  Sparse columns are useful for data with lots of NULLs.  Often this is the result of a problem that doesn’t fit traditional third normal form, or perhaps of someone who is not a database person not really treating something as a database problem early enough in its lifecycle.  The point is that commercial database systems have a sweet spot around handling data sets with known (and small) sets of columns that can be stored in tables.  There is a TON of expressiveness available in query processors that manipulate this data, because this format is better supported than other formats.

None of this really means that your problem is going to fit easily into a nice third-normal-form system.  Often there are legacy or performance concerns that push an application away from that sweet spot.  Over time, various technologies have tried to bridge that gap (property tables, XML, and object-relational mappings).  Each of them has its own reasons to exist, and I don’t want to get into them in depth in this post.  I’m going to talk about how the QP deals with these from a modeling perspective.

I built two examples to explore how SQL Server 2008 reasons about sparse columns.  One example creates lots of traditional, nullable float columns while the other is exactly the same except that it uses the sparse attribute.

A few things I learned immediately:
1. Sparse columns don’t change the maximum number of columns you can create in a table.  On the surface, this seems unfortunate, since it will limit the kinds of applications that can use the feature. 
2. It does seem to use less space per row.  This isn’t hard, as the row format for SQL Server has a null bitmap and also needs 2 bytes per column to store the variable offset pointers.

create table sp1(aaa int)
create table sp2(aaa int)

declare @i int
set @i=0
while (@i < 990)
begin
declare @sql nvarchar(400);
declare @s nvarchar(20);
set @s = @i;
set @sql = 'alter table sp1 add col' + @s + ' float sparse'
exec sp_executesql @sql
set @i=@i+1
end
go

declare @i int
set @i=0
while (@i < 990)
begin
declare @sql nvarchar(400);
declare @s nvarchar(20);
set @s = @i;
set @sql = 'alter table sp2 add col' + @s + ' float'
exec sp_executesql @sql
set @i=@i+1
end
go

declare @i int
set @i=0
while @i < 20000 
begin
insert into sp1(col2) values (123.4)
set @i=@i+1
end
go

declare @i int
set @i=0
while @i < 20000 
begin
insert into sp2(col2) values (123.4)
set @i=@i+1
end

If we run “set statistics io on” and then run “select * from sp1” and “select * from sp2”, you’d like to see some difference in IOs:

sp1:
(20000 row(s) affected)
Table 'sp1'. Scan count 1, logical reads 86, physical reads 0, read-ahead reads 80, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

sp2:
(20000 row(s) affected)
Table 'sp2'. Scan count 1, logical reads 20000, physical reads 1, read-ahead reads 19978, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Well, that’s good – the sparse format on largely sparse data saves space.  We can confirm that with a quick look into the system tables:

SELECT o.name AS table_name, au.type_desc, au.used_pages
FROM sys.allocation_units AS au
    JOIN sys.partitions AS p ON au.container_id = p.partition_id
    JOIN sys.objects AS o ON p.object_id = o.object_id
WHERE o.name in (N'sp1', N'sp2')

table_name   type_desc           used_pages
----------   -----------------   ----------
sp1          IN_ROW_DATA         87
sp1          ROW_OVERFLOW_DATA   0
sp2          IN_ROW_DATA         20001

(3 row(s) affected)

We’ve now confirmed that we actually do have fewer pages.  This is also good.

Now let’s see how far into the QP this extends.  Does the QP model the costs for these two queries differently?
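One way to compare them is the estimated plan output, which is where the TotalSubtreeCost numbers below come from:

set showplan_all on
go
select * from sp1
select * from sp2
go
set showplan_all off
go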

SP1 TotalSubtreeCost: 0.08824496
SP2 TotalSubtreeCost: 14.83936

And that, my friends, is a “good thing”.  This means that sparse columns are going to help your complex queries when you use a table with sparse columns in it.  The easiest way to implement this feature would have been to simply ignore it in the QP, so obviously someone did a good job making sure that it was costed properly.

I don’t believe that there are additional statistical structures to tell the QP which things are on/off row.  This will show up in a small number of scenarios (similar to how LOB data can be on/off row).  This is outside of the model for how the QP reasons about plan cost, at least from what I’ve seen from SQL 2008 and from what was publicly said about 2005.

Thanks all,

Conor Cunningham

1753, datetime, and you

So now that I have the latest CTP working again on my main machine, it’s far less troublesome to go research things and post what I find.  Tonight I’ll talk a little about datetime vs. date, as dates are on my mind for whatever reason.

So the “old” SQL Server datetime type only goes back to 1753, which seems very odd until you remember that current notions of date and time are really not that old.  In 1582, Pope Gregory XIII fixed the calendar because the church realized that the year was actually slightly off compared to what the calendar said – each year the date of Easter ended up drifting slightly further from what was intended.

Modern computer science students (at least ones who did the assignments *I* did in school :) will remember the funky rule for calculating a leap year – it is a leap year if the year is divisible by 4 but not by 100, unless it is also divisible by 400.  The year 2000 was just such a case, so we had a leap year then.  2100 will not be a leap year.  They had different fancy math in the original Julian calendar (as in Caesar), with a special month that happened every 377 years or so.  Later they tried to fix this because they wouldn’t actually do the month every 377 years.  I promise – I’m not making this up.

By the time they figured it out, they were off by several days.  In modern terms, the OS shipped, everyone installed it, and it has a fatal bug that impacts every customer on upgrade to the hotfix :(.  So while today such bugs can put you out of business, back then the church had a bit more market power than the average customer might have today.  So, they decided to fix things.  They changed the calendar and skipped 10 days in the process to fix the client.  Even worse, different clients (countries) installed the patch on different days, so they all changed dates differently.

It so happens that the British Empire (the “pink bits” on the old maps) adopted this in 1752.  That’s almost 200 years after the Pope did his thing.  So, since SQL Server was first done in the US, Jan 1, 1753 is the first legal date, because all of the math before that is simply dizzying.  Alaska actually didn’t switch until the late 1800s, since Russia controlled it before that, and the Orthodox calendar celebrates its Christmas and Easter later than those in the West because of this very issue.

So if I try this with the old datetime type, I get:

create table datetbl(col1 datetime)
insert into datetbl(col1) values ('17510101')


Msg 242, Level 16, State 3, Line 1
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
The statement has been terminated.

So I figure I’ll try this out on the new date type:

create table datetbl2(col1 date)
insert into datetbl2(col1) values ('17510101')

(1 row(s) affected)

Well, the legal date range is listed as 0001 to 9999.  So let’s check the math and see what happens…

create table datetbl3(col1 date, col2 date)
insert into datetbl3(col1, col2) values ('17510101', '17530101')
select DATEDIFF(dd, col1, col2) from datetbl3

731

Well, it does not appear that any days were skipped there.  It even counted 1752 as a leap year (divisible by 4 and not by 100), so instead of 730 we get 731.
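As a quick sanity check that the new type also gets the modern century rule right (the string literals convert implicitly, and the same check works with datetime since these dates are in range):

select datediff(dd, '20000101', '20010101')  -- 366: 2000 is divisible by 400, so it is a leap year
select datediff(dd, '19000101', '19010101')  -- 365: 1900 is divisible by 100 but not 400, so it is not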

I looked at Books Online, but so far I haven’t found any reference to this difference in the DATEDIFF documentation or in the discussion of valid ranges.  While I am sure that a number of people will not care that much and will mostly just want to use historical dates with low precision, it’s still important to make sure that a core system does its calculations correctly.

Perhaps they will add a comment into BOL or a warning for datediff and other function uses around the various switch points for dates.  Perhaps not.  However, it’s important for you, the programmer, to know the issues when dealing with old dates.

I originally learned about all of this stuff in detail when I reverse-engineered the SQL Server Expression Service Code while I was building SQL Server for Windows CE/SQL Mobile/SQL Server Compact Edition/whatever it is called now.  I’ve found this area to be fascinating because it takes something we take for granted and just slaps you in the face a few times.  I hope you have a bit more insight now as to the 1753 date limitation and perhaps will go crack open that history book :).

Thanks,

Conor Cunningham

SQL 2008 CTP Installation Problems – Largely Fixed

Well, I was able to get things to finally install using the information from my previous post and using a named instance (different from the default instance I used previously).  So, while I’d still like to track down the keys that define an installation instance with enough detail to remove them, I think I’ve gotten close enough that others should be able to use this as a template to avoid an OS re-install.

I’d like to extend a HUGE thank you to the Microsoft SQL Server Setup Dev Team who spent time looking at this problem and providing a workaround for me.  Please accept my gratitude.

I am sure that they are hard at work getting setup ready for RTM, which will include fixing problems like the one I’ve seen.

So, please post up if you have other questions about what I did – I’ll answer them if I can.

Thanks,

Conor

Progress on the SQL 2008 CTP6 Installation Problems

You may recall my previous posts on my trouble with SQL 2008 CTP6.  I’ve made some progress on fixing my machine that I thought I’d share with you.  I now get past the following error:

D:\temp\sqlctp6\servers>setup
The following error occurred:
MsiGetProductInfo failed to retrieve ProductVersion for package with Product Code = ‘{00E75F61-A126-4CE1-90B8-42295052F1AC}’. Error code: 1605.

I used the SysInternals (err, Microsoft) Process Monitor tool and watched for keys found/not found during the failed install.  This turned up a few keys in HKCR\Installer\UpgradeCodes that were being opened early in the setup100.exe process.

(Fair notice – modifying the registry on your computer is your problem, not mine :).

So take the key:
00E75F61

Reverse it.  I think I had 615fe700, but it was late and I was tired.  It might have been 16f57e00.  Anyway, you will see some keys under HKCR\Installer\UpgradeCodes.  There are actually 2-3 places in the registry searched for each key.  I’ve been killing all of them for each key – there are about 7-10 keys.  The registry section looks like this:

The hex digits you have in the error will correspond to the right-hand side of this picture.  The key I’ve been deleting is its parent, which is the key being opened in the “key found”/“key not found” entries in the Process Monitor log.

Here’s what the log looked like for me:

136791  11:53:49.4922341 PM  setup100.exe  2812  CloseFile
    D:\temp\sqlctp6\tools\setup\sqlrun_bids.msi
    SUCCESS

136793  11:53:49.4923712 PM  setup100.exe  2812  WriteFile
    C:\Program Files (x86)\Microsoft SQL Server\100\Setup Bootstrap\Log\20080313_2353\WOPR_20080313_2353_Detail_ComponentUpdateSetup.txt
    SUCCESS  Offset: 7,591, Length: 62

136795  11:53:49.4956628 PM  setup100.exe  2812  RegOpenKey
    HKLM\Software\Microsoft\Windows\CurrentVersion\Installer\Managed\S-1-5-21-2888934283-224128331-3030229123-1000\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    NAME NOT FOUND  Desired Access: Read

136796  11:53:49.4957796 PM  setup100.exe  2812  RegOpenKey
    HKCU\Software\Microsoft\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    NAME NOT FOUND  Desired Access: Read

136797  11:53:49.4958199 PM  setup100.exe  2812  RegOpenKey
    HKCR\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    SUCCESS  Desired Access: Read

136798  11:53:49.4958553 PM  setup100.exe  2812  RegEnumValue
    HKCR\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    SUCCESS  Index: 0, Name: 16F57E00621A1EC4098B249205251FCA, Type: REG_SZ, Length: 2, Data:

136799  11:53:49.4958960 PM  setup100.exe  2812  RegOpenKey
    HKLM\Software\Microsoft\Windows\CurrentVersion\Installer\Managed\S-1-5-21-2888934283-224128331-3030229123-1000\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    NAME NOT FOUND  Desired Access: Read

136800  11:53:49.4959398 PM  setup100.exe  2812  RegOpenKey
    HKCU\Software\Microsoft\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    NAME NOT FOUND  Desired Access: Read

136801  11:53:49.4959604 PM  setup100.exe  2812  RegCloseKey
    HKCR\Installer\UpgradeCodes\87674BD65E9A5D1409951D671E37BDA4
    SUCCESS

136802  11:53:49.5146648 PM  setup100.exe  2812  RegOpenKey
    HKLM\Software\Microsoft\Windows\CurrentVersion\Installer\Managed\S-1-5-21-2888934283-224128331-3030229123-1000\Installer\Products\16F57E00621A1EC4098B249205251FCA
    NAME NOT FOUND  Desired Access: Read
Now the installer gets past this and tries to install the engine and then fails, but I will call this progress ;).

Notice – I had deleted all of my physical files for SQL Server from the machine, so killing the registry keys seemed like a reasonable next step.  I can’t promise you it’s a good idea since I don’t have things working yet, but I hope this helps the many of you who mailed me and found me via search engines.

Thanks,

Conor Cunningham