When auto-update statistics kicks in, you are correct that we cannot influence the sampling algorithm. Regarding steps – 200 sometimes does not go very far (and the table doesn’t even need to be very large for this to be the case).
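If you want to see how far those 200 steps actually stretch across your data, you can inspect the histogram directly (the table and statistics names below are placeholders):

```sql
-- Show the histogram for a statistics object; each row is one step,
-- so you can see exactly which boundary values the optimizer has.
-- 'dbo.MyTable' and 'IX_MyTable_Col1' are placeholder names.
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_Col1') WITH HISTOGRAM;
```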
Filtered stats can be helpful, but if you are using parameterization or other techniques where the qualifying value is not known at compile time, those filtered stats may not actually be used. Regarding your comment on partitioning, this isn’t inherently helpful in this case unless you are using it in conjunction with filtered stats, which is what I think you were referencing.
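For reference, a filtered statistic on a known popular value looks like this (the table, column, and predicate value are illustrative):

```sql
-- Filtered statistics scoped to one "popular" value; only rows
-- matching the WHERE clause feed the histogram, so all 200 steps
-- are dedicated to that slice of the data.
CREATE STATISTICS st_Orders_Status_Open
ON dbo.Orders (Status)
WHERE Status = 'Open'
WITH FULLSCAN;
```

As noted above, if the query is parameterized the optimizer may not match this statistic, since the predicate value isn’t known at compile time.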
Another option is to see whether manual statistics updates with a higher sampling percentage prove helpful. Sometimes they do, but you need to test it out.
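For example (the names and sampling percentage are illustrative; test against your own workload):

```sql
-- Rebuild one statistics object with a higher sampling rate than
-- the default auto-update would choose.
UPDATE STATISTICS dbo.Orders st_Orders_Status_Open
WITH SAMPLE 50 PERCENT;

-- Or scan every row for the most accurate histogram, at the cost
-- of more I/O during the update.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```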
Best Regards,
Joe
When the sampling kicks in, am I correct in thinking there is no way of influencing the sampling algorithm? For example, if you have billion-row tables with highly variable and skewed data, 200 histogram steps do not go very far. So if you know there are values of importance, it would be nice for the step boundaries to fall on those popular values. I suspect that if you want to do this, filtered statistics and partitioning are the way to go – is this correct?
Regards,
Chris