SSMU Web Seminar: Indexing for Performance
Part III: Indexing for Aggregates/Indexed Views

Kimberly L. Tripp

 

Q & A

 

Q: Indexed Views seem overly complex – are they ever useful?

You’re right that there are a TON of restrictions, and often you’ll want to avoid indexing a view when it creates too much duplicate data that has to be maintained or when it creates a hot-row scenario (like an aggregate grouped on country) BUT, more dramatically than any other index, an indexed view can also improve your performance… You should do some testing with these and always remember INSERT/UPDATE and DELETE performance as well. This is really just another form of covering and, as I mentioned in Part II, not everything should be covered!
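
For reference, here’s a minimal sketch of the kind of aggregate view that can be indexed (the dbo.Sales table and its columns are hypothetical; SalesAmount is assumed NOT NULL, which is required for SUM in an indexed view). The view must be created WITH SCHEMABINDING and include COUNT_BIG(*), and the required SET options (ARITHABORT, QUOTED_IDENTIFIER, etc.) must be on before you create the unique clustered index:

-- Hypothetical aggregate view; all object names are placeholders.
CREATE VIEW dbo.SalesByCountry
WITH SCHEMABINDING
AS
SELECT  Country,
        SUM(SalesAmount) AS TotalSales,
        COUNT_BIG(*)     AS NumberOfRows
FROM    dbo.Sales
GROUP BY Country
GO

-- The unique clustered index is what materializes the aggregate.
CREATE UNIQUE CLUSTERED INDEX SalesByCountryCountry
    ON dbo.SalesByCountry (Country)
GO

Notice that every INSERT/UPDATE/DELETE against dbo.Sales also has to maintain the single aggregated row for that country – exactly the hot-row concern mentioned above.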

 

Q: Can we Index a View based on a View?

We were right at 60 minutes and I completely spaced on the answer to this one when you asked, but the answer is a simple NO! You cannot create an index on a view that references other views or functions. Check out the “Indexes on Views on Views - NOT Allowed.sql” script to generate the following error:

 

Server: Msg 1937, Level 16, State 1, Line 1

Cannot index the view 'Credit.dbo.test2'. It references another view or function 'dbo.test'.
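
If you don’t have the script handy, here’s a minimal sketch that reproduces the restriction (the underlying dbo.Charge table and its columns are hypothetical; the view names match the error message above):

CREATE VIEW dbo.test
WITH SCHEMABINDING
AS
SELECT charge_no, charge_amt FROM dbo.Charge
GO

CREATE VIEW dbo.test2
WITH SCHEMABINDING
AS
SELECT charge_no, charge_amt FROM dbo.test
GO

-- This fails with Msg 1937 because the view references another view.
CREATE UNIQUE CLUSTERED INDEX test2_charge_no ON dbo.test2 (charge_no)
GO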

 

Q: I assume indexed views referencing a table that is bulk loaded must be dropped before a bulk load to achieve a fast bulk load?

The performance of a bulk operation is affected by many factors – only one of which is existing indexes. Yes, it’s generally true that the fastest load is achieved by loading into a completely empty heap, BUT that’s not always possible. Depending on the percentage of data being loaded and the total number of indexes, you really have two options:

1)      Drop all indexes, load data, recreate indexes

Best when a large percentage of data (relative to the table) is being loaded

2)      Leave indexes in place, load data, possibly rebuild indexes after load.

Best when a small percentage of data (relative to the table) is being loaded (see the sketch below).
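
As a rough sketch of option 2 (the table name and file path are hypothetical), the load goes into the table with its indexes still in place and the indexes can then be rebuilt if the load left them fragmented:

-- Load with the existing indexes left in place.
BULK INSERT dbo.Charge
    FROM 'C:\Loads\charge_extract.dat'
    WITH (TABLOCK, BATCHSIZE = 100000)
GO

-- Optionally rebuild all of the table's indexes afterwards (SQL Server 2000 syntax).
DBCC DBREINDEX ('Charge')
GO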

In addition to indexes, the logging mode can significantly impact the performance of a bulk operation. A database logs operations based on the recovery model chosen for that database. Choosing a recovery model is not something to do randomly (you should spend a bit of time reviewing the Books Online) BUT setting the recovery model to BULK_LOGGED for the duration of the bulk operation can significantly reduce the logging overhead and the time of the operation. More than anything, you should do some testing. Also, consider checking out some of the past presentations on www.SQLDev.net; Gert Drapers has a few sessions on High Performance Data Loading.
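
For example (the SalesDB database name and backup path are only placeholders), the switch looks roughly like this; if the database normally runs in FULL recovery, take a log backup after switching back so the log chain once again supports point-in-time recovery:

-- Minimally log the bulk operation.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED
GO

-- ...perform the bulk load here (BULK INSERT, bcp, DTS, etc.)...

-- Return to the normal recovery model and back up the log.
ALTER DATABASE SalesDB SET RECOVERY FULL
GO
BACKUP LOG SalesDB TO DISK = 'C:\Backups\SalesDB_log.bak'
GO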

 

Q: Where can we get all of these scripts/demos, etc.?

As an attendee you have already received an email that should have included the sample scripts, BUT I also keep them on www.SQLSkills.com under Resources, Past Events (or Events, Past Events). All of the demo scripts are there and available!

 

Resources:

Whitepaper: Improving Performance with SQL Server 2000 Indexed Views http://msdn.microsoft.com/library/en-us/dnsql2k/html/indexedviews1.asp?frame=true

 

Search SQL Server Magazine’s website http://www.sqlmag.com

 

Thanks for the great questions everyone!

You guys were quiet this week; I actually kept up with most of your questions… :)

 

See you in Part IV,

Kimberly

 

Kimberly L. Tripp

President, SYSolutions, Inc. www.SQLSkills.com

Principal Mentor, Solid Quality Learning www.SolidQualityLearning.com