In the editor, the ReadOnlyVariables and ReadWriteVariables properties in SQL Server 2005 required you to type in the variable names. You can still do this, but I always ask, why type when you can select an item from a list? It saves keystrokes and, more importantly, ensures a correct value! In SQL Server 2008, there is an ellipsis button to click that opens a Select Variables dialog box displaying all the variables in the System and User namespaces. The downside (at least in the November CTP in which I most recently tested this feature) is that this list seems to be a bit random and you can’t sort by name or type.
The PrecompileScriptIntoBinaryCode property in SQL Server 2005 is removed in SQL Server 2008. The removal of this property means you no longer get the choice of whether to precompile the code before package execution (which gets better performance at the expense of a larger package) or to compile just-in-time.
The default behavior in SQL Server 2008 is to precompile your script. When you’re editing the script, a set of VSTA project files is created (or reopened), but these files are deleted from your hard disk and persisted only in the package.
Edit Script button (formerly known as the Design Script button)
This is such a little change, but it means a lot to me and, I’m sure, to a lot of others. While intellectually I understand that it doesn’t require a lot of physical effort to click the mouse a few extra times, I am invariably annoyed when I feel I’m clicking much more than necessary to accomplish a task or to reach a destination. With regard to the Script Task and Script Component, I appreciate that the Edit Script button is now on the first page of the editor. Hooray! What really happened here was that the order of the pages changed. In SQL Server 2005, the pages in the editor are General, Script, and Expressions, whereas in SQL Server 2008 the pages are Script, General, and Expressions. Microsoft acknowledged that the most frequent reason people open the editor is to edit the script and thus shortened the path to get there.
Access to External Assemblies
If you want to add a reference to a DLL in your script in SQL Server 2005, the VSA interface limits access to external assemblies to the Windows\Microsoft.NET\Framework\v2.0.50727 folder on your development machine and to the GAC or that folder on the machine running the package. The Add Reference dialog presents a list of DLLs you can add to your script, but there is no way to browse to another location where you might prefer to (or must) store your DLLs. Furthermore, there is no way to add a Web reference if you want to interact with a Web service. Instead, you have to create your own proxy class first, register the class on the development and production machines, and then reference that class in your script. Not impossible, but lots of extra steps are required.
In SQL Server 2008, the VSTA interface in the Script Task and Script Component makes development much easier. First, the Add Reference dialog box has all the tabs you’d expect in a full IDE – .NET, COM, Projects, Browse, and Recent. The key point here is you can browse to the location of the DLL you want to use now. Second, you have the ability to add a Web reference and thereby get the proxy class created automatically for you without having to go through all the steps required to get this behavior in SQL Server 2005. Jamie Thomson has posted a video demonstrating how to consume a Web service with VSTA in SQL Server 2008 Integration Services which you should check out if this sort of functionality is useful in your environment. –Stacia
One of the benefits of SQL Server 2008 Reporting Services is the removal of the dependency on IIS. This architectural change is huge because it opens the door to Reporting Services for those IT shops that wouldn’t allow installations of SQL Server components on a Web server. Plus it removes one more layer of potential configuration problems to troubleshoot when connectivity issues arise. This architectural change also affects the steps we normally follow to configure the server. So I decided today to do a little exploring to see what’s different.
Now, it’s important to note that the July CTP doesn’t provide the complete configuration and management experience because, after all, Katmai is still a work in progress. But enough is there for me to start poking at the parts to see what’s new and different. After installing Katmai on a Windows Server 2003 server, the Reporting Services Windows service is running under the Network Service account. I have not installed IIS (and won’t), but the default installation sets up my report server databases, so I should be good to go. No dependency on IIS means I don’t have to set up application pool identities. As an initial test, I try to access Report Manager using the standard URL (localhost/reports) and see that it displays the Home page, so everything is working under the default configuration. So far, so good.
Next, I want to peruse the Reporting Services Configuration to see what the current settings are, so I open the configuration tool and connect to the local report server instance. I can immediately see a difference in the configuration tool layout (including the absence of little green or red buttons to indicate whether that page has been configured):
I only need to have one service account configured for the Reporting Services service, rather than configure one for its Windows service and another for the Web service. On the Service Account page of the configuration tool, I can change to a different built-in account (which I don’t recommend) or to a domain account (which I do recommend).
Although IIS is no longer required, you still need to configure an IP address, TCP port, a URL, a virtual directory, and optionally an SSL certificate to create a URL reservation. A URL reservation is the mechanism by which http.sys – the operating system API required to run the Web service without IIS – allows users to access a specific URL. You can configure these settings on the Web Service URL page. An Advanced button on this page displays a dialog box that I can use to configure a variety of IP addresses, ports, host headers, and SSL ports if necessary. (I’ll delve into your options with URL reservations in more detail in a future blog.) When you apply the configuration settings, the applicable URL reservations are created. If you’re curious about how http.sys enables applications to run without IIS, see this article from MSDN Magazine.
The Database page of the configuration tool does what it always did. You can create a new report server database, connect the current report server instance to a different existing report server database, or even switch to a different mode (native or SharePoint integrated). The main difference here is the interface: configuring the database is now a series of pages in a wizard. I’m not sure whether this is a good thing or not – no opinion, really – except for one thing. Once you walk through the wizard, there’s no option to save the database script. I hope this is fixed in a future CTP, as I have a current client that always has one group build the report server, while the database needs to be built on a separate server to which that server-building group doesn’t have the permissions needed to execute the script. So we generate the script and hand it off to the DBAs. This omission is pretty big, but as I mentioned earlier, Katmai is a work in progress. I’ll keep my eye on this one.
The next page is the Report Manager URL which you use to set the virtual directory for, well, Report Manager. I’m not sure why this isn’t positioned after the page for the Web Service URL. It doesn’t matter in the grand scheme, but it feels out of place here to me. This page also includes an Advanced button allowing you to set up IP address, ports, host headers, and SSL ports if needed.
The Email Settings page hasn’t changed from SQL Server 2005’s configuration tool. All it does is allow you to put in a sender address and set up an SMTP server, so I wouldn’t expect a change here. However, if I could submit a wish, it would be nice to have other configurable SMTP settings on this page. Currently in SQL Server 2005 – and it appears this won’t change in Katmai – you have to edit the configuration file for important properties like SendUsing or SMTPAuthenticate (see the RSReportServer configuration file for other SMTP-related properties).
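For reference, the SMTP-related settings live in the RSEmailDPConfiguration section of the rsreportserver.config file. Here’s a rough sketch of the relevant fragment – the server name and sender address below are placeholders, and as always with CTP-era details, verify the element names against your own configuration file:

```xml
<RSEmailDPConfiguration>
  <!-- Sender address and SMTP server; these values are placeholders -->
  <From>reports@example.com</From>
  <SMTPServer>mail.example.com</SMTPServer>
  <!-- SendUsing: 2 = send over the network, 1 = drop into a local pickup directory -->
  <SendUsing>2</SendUsing>
  <!-- SMTPAuthenticate: 0 = anonymous, 1 = basic, 2 = NTLM -->
  <SMTPAuthenticate>0</SMTPAuthenticate>
</RSEmailDPConfiguration>
```

The configuration tool surfaces only the first two of these; the rest are edit-the-file-by-hand territory.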
The Execution Account settings page hasn’t changed either, but I can live with that. There’s nothing I would change. Similarly, the Encryption Keys page hasn’t changed functionally, but the UI has been modified to include text that better explains each option (Backup, Restore, Change, Delete).
The Scale-out Deployment page is the new name for SQL Server 2005’s Initialization page and, in my opinion, is a better name for it. I don’t currently have an environment set up to test setting up a report server farm, so I can’t comment on what differences you might find here, but I would not expect much different from the SQL Server 2005 experience. If I find otherwise in the future, I’ll blog about it.
Setting the authentication method no longer occurs in IIS, obviously. Now authentication configuration happens only in configuration files, which I’ll be exploring in much greater detail in a forthcoming blog (because it’s near and dear to my heart at the moment since I’m speaking on SSRS and authentication configuration next week at SQL Server Magazine Connections – I need to find out what changes in that presentation once Katmai releases!). For now, you should be aware that the default configuration of Reporting Services requires users to have a Windows domain account. Authentication is set to Negotiate, much like IIS, which uses Kerberos if it’s enabled or NTLM if it’s not. You can force Kerberos only or NTLM only by changing the report server configuration. Alternatively, you can use Basic authentication (although this feature will come in a future CTP) or Anonymous authentication if you’re adding in custom security like forms authentication. Note that the report server will reject anonymous authentication unless you are explicitly using custom security. Also, Single Sign-On (SSO), Passport, and Digest authentication will not be supported. More to come soon! –Stacia
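To make the “only in configuration files” point concrete, here’s roughly what the authentication section of rsreportserver.config looks like – treat the element names as a sketch, since they could change before RTM:

```xml
<Authentication>
  <AuthenticationTypes>
    <!-- Default: Negotiate (Kerberos if available, otherwise NTLM) -->
    <RSWindowsNegotiate/>
    <!-- To force a single protocol, use <RSWindowsKerberos/> or
         <RSWindowsNTLM/> instead; <RSWindowsBasic/> enables Basic
         authentication once that feature ships. -->
  </AuthenticationTypes>
</Authentication>
```

Swapping the child element is all it takes to force Kerberos only or NTLM only, which is exactly the kind of change that used to require a trip into IIS Manager.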
Once upon a time, there was such a thing as a talking car. I never owned one, but I did get to drive one for a week in Quebec while a colleague and I were working with a client up there back in the late 80s. Normally, we were supposed to rent a compact car when out on business, but we had to pick up a bunch of computer equipment at air cargo and there was no way our luggage and the equipment was fitting into a compact. As it turned out, the only car that accommodated us was a New Yorker (and even then it was pretty tight). We quickly discovered that the New Yorker was one of those talking cars – with a French male voice. We named him Pierre and proceeded to try out things to see what he would say and add to our French vocabulary while we were at it. I don’t think we had an owner’s manual to simply peruse the list of errors we could commit (and should presumably avoid) for which Pierre would gently scold us. As time has shown, demand for Pierre and his counterparts simply didn’t hold up in the market. Maybe people can accept warning lights, but not a warning voice?
In SQL Server 2008, the cube and dimension designers in Analysis Services now come with best practice design warnings, but fortunately Dev Studio doesn’t read them aloud to you. A visual indicator – which I’ll call the blue squiggly – will appear on screen to highlight the offending object. The first warning you’re likely to see when you create a dimension is associated with the dimension object at the top of the attribute tree. This warning says (in the July CTP), “Create hierarchies in non-parent child dimensions.” As soon as you create a user hierarchy, the blue squiggly goes away, right? Nope… now you probably have a new warning on the dimension object if the attributes you selected are all visible – “Avoid visible attribute hierarchies for attributes used as levels in user-defined hierarchies.” And the hierarchy object now probably has a blue squiggly to let you know that there are no attribute relationships defined between one or more levels in the hierarchy. (Remember, this is a brand-new dimension.)
Don’t worry about more warnings appearing as you do your design work. Just go about your normal business, and hopefully all will clear up before you’re ready to deploy the project. Many of the 48 warnings (in the July CTP) are well-known best practices to experienced Analysis Services developers. So what’s the point of including best practices if they are so well-known? Well, not everyone implementing SQL Server for the first time has access to experienced developers, so their experience will be much more positive with Analysis Services if they are warned about the pitfalls before they fall in.
Rather than haphazardly try out something to see whether or not it conforms to best practices, as I did with Pierre, you can jump straight to Books Online to see the complete list of the warnings (including links to more information about each). Search for the topic, “Design Warning Rules.” The warnings are organized into categories (in the July CTP BOL) as follows: Dimensions, Aggregations, Partitions, Attributes and Attribute Relationships, Measures and Measure Groups, User-defined Hierarchies, ROLAP and MOLAP storage, Data Providers, and Error Handling. Some warnings come with better explanations about best practices than others. I hope this will improve over time, because for the uninitiated, these warnings without explanation are little more than “because I said so” instead of the educational opportunity they could be.
Like Pierre’s reminders that we were doing something contrary to the established best practices of driving, the Analysis Services design warnings are there to alert you to potential hazards, but won’t stop you from ignoring them. For example, I’m not certain that I agree that one should always “Avoid visible attribute hierarchies for attributes used as levels in user-defined hierarchies.” This is a matter best decided in conjunction with users, in my opinion, after explaining the pros and cons of this approach. Some implementations may not have this luxury, in which case I would defer to the best practice recommendation.
Some best practices earn a chuckle from me, such as “Define a time dimension.” I have yet to meet a cube without one. I had a student insist once that they had seen one, but when pressed could not describe the purpose of the cube. I’m still waiting for a cube without a time dimension. I’m not saying it’s not possible, but I can’t imagine why you would want one as time-series analysis is one of the most compelling reasons to build a cube in the first place.
Some best practices contradict default values for dimensions (in the July CTP), which also amuses me, such as “Change the UnknownMember property of dimensions from Hidden to None” or “Define attribute relationships as ‘Rigid’ where appropriate”. It seems to me the Analysis Services dev team could easily change the default values to accommodate these best practices, as they did with “Do not ignore duplicate key errors. Change the KeyDuplicate property of the error configuration so that it is not set to IgnoreError”. To clarify, in SQL Server 2005, the default KeyDuplicate property value is IgnoreError, but this is changed to ReportAndStop in SQL Server 2008.
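In scripting terms, the KeyDuplicate change amounts to a different default in the object’s error configuration. A sketch of the relevant ASSL fragment – the property and value names follow the SQL Server 2005 schema, and the 2008 default could still shift between CTPs:

```xml
<ErrorConfiguration>
  <!-- SQL Server 2005 default: IgnoreError
       SQL Server 2008 default: ReportAndStop -->
  <KeyDuplicate>ReportAndStop</KeyDuplicate>
</ErrorConfiguration>
```

If you script out a dimension or partition, this is the setting to check when you want to know how duplicate keys will be handled during processing.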
As mentioned earlier, before you deploy your project, you should clean up – to the extent you wish – the current warnings in your project. Warnings won’t stop your deployment, but you should make a conscious decision whether to ignore the surfaced warnings. A comprehensive list of all warnings in your project can be found in the Error List window (which you can open with Ctrl+E). Double-click an error to access the designer and fix the problem. Alternatively, you can right-click the error and click Dismiss to clear it off the list if you don’t intend to fix it. You can even add a comment to document your reason for ignoring this error. This method of clearing the error is instance-based and will not clear the same error if it’s found in a different dimension or cube. To globally dismiss a particular type of error, whether proactively before you start development or after the fact, you can access the new Warnings tab in the Database editor (which you can open on the Database menu by clicking Edit Database). Incidentally, the Warnings tab also contains a list of the warnings dismissed individually and the related comments.
All in all, I think this is a nice feature in SQL Server 2008 Analysis Services, particularly for the many folks out there who are just getting started with this technology. Just as long as the warnings stay visual. As much as I like technology in general, I still don’t think I’m ready for Dev Studio to start talking to me like Pierre, and I suspect many other people feel the same way. –Stacia
In my previous post, I covered the new dimension wizard and mentioned there were options for creating time dimensions that I would cover later. Now I’ll explain those options further.
Time Dimension Options in SQL Server 2005 Analysis Services
Let’s start with a quick review of what happens in SQL Server 2005 (referred to as Yukon hereafter). On the Select the Dimension Type page of the dimension wizard, you can choose Standard, Time Dimension, and Server Time Dimension.
If you select Time Dimension, you identify the time table in your DSV and then map your time columns to the Analysis Services time properties. For example, you map a CalendarYear column in your time table to the Year property. This association of a table column to a property tells Analysis Services how to handle time-related MDX functions like YTD or PeriodToDate. I admit I find this mapping process tedious, but necessary. When you use a time dimension table, you have to manage the processing of the Analysis Services dimension to add new time members if you incrementally add members to the table (instead of populating it well into the future as some people prefer to do). The benefit of this approach is the ability to include time attributes that mean something to your industry, such as a flag for a holiday or a weekend-versus-weekday indicator. You can also confirm the inclusion of hierarchies based on the columns you map to time properties. I never liked the inability to change the hierarchy names here, but that’s just a nit. You can, of course, add your own hierarchy or modify the hierarchy name later in the dimension editor.
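If you’ve never used the time-related functions, the payoff of the mapping is easy to see with a toy calculation. This little Python sketch (not Analysis Services code, just an illustration of what YTD computes) produces the running total through each month – the point being that the server can only compute this for you if it knows which column represents the year and which the month:

```python
from itertools import accumulate

# Monthly sales for one year; YTD at each month is the running total so far.
months = ["Jan", "Feb", "Mar", "Apr"]
sales = [100, 120, 90, 110]

ytd = list(accumulate(sales))  # running totals: [100, 220, 310, 420]
for month, total in zip(months, ytd):
    print(month, total)
```

Mapping CalendarYear to the Year property is what lets the YTD function draw the period boundaries without you spelling them out in every query.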
If your time-related analysis is pretty simple and you don’t want to manage a time dimension in your data source, you can create a Server Time Dimension instead. This is a pretty handy feature that lets you define a date range for the dimension, the type of attributes you want to include (year, quarter, month, etc.), and the calendars to support (calendar, fiscal, etc.). You can confirm the default hierarchies just as you can with a table-based time dimension. The generated dimension includes several additional attributes, such as Day of Month and Day of Year, and there are other optional attributes you can add using the editor, such as Day of Week or Trimester of Year. When you want to add the Server Time dimension to a cube, you have to add it on the Cube Structure page of the cube designer because the cube wizard doesn’t have a way for you to add it there. You still need to ensure the end date of the range of your Server Time dimension is equal to or greater than the maximum date in your fact table.
There is a third option available. You could generate a time table by selecting the option to build the dimension without a data source. Select the Date template and you’ll get a similar interface as that for the Server Time Dimension. When you complete the dimension wizard, you have the option to generate the schema on the spot or you can run the Schema Generation Wizard at a later time. The Schema Generation Wizard lets you create a new data source view for your time table or select an existing DSV. You can even choose to have the wizard populate the time table or you can leave it empty. You’re also given the opportunity to specify naming conventions. By the way, your credentials are used to create the database objects so you’ll need to be sure you have the correct permissions on the data source. This is a nice way to get started with a time table but you’ll need to keep it up-to-date with an ETL tool and you can’t customize it to have more or fewer columns.
Time Dimension Options in SQL Server 2008 Analysis Services
As I mentioned in my previous blog entry, SQL Server 2008 Analysis Services (which I’ll call Katmai from now on) gives you four choices for creating a dimension:
· Use an existing table
· Generate a time table in the data source
· Generate a time table on the server
· Generate a non-time table in the data source (using a template)
The second and third options relate specifically to a time dimension. If you select “Generate a time table on the server,” you get the same result as the Server Time Dimension in Yukon and an almost identical interface in the wizard. The exception is that the hierarchy confirmation page is missing in Katmai – which is fine for me as I’d rather fine-tune the hierarchy in the editor anyway.
The new option, “Generate a time table in the data source,” is the same as Yukon’s option to build the dimension from the Date template. You run the Schema Generation Wizard to design the table and optionally to populate it the first time.
So what do you do if you have a time dimension in your data source? The only option is to choose “Use an existing table.” On the Select Dimension Attributes page of the dimension wizard, which you use to select the columns from your table to include in your dimension, you have the ability to change the attribute type. This is the equivalent of mapping columns to time properties in Yukon, although I must say the interface is not ideal for this task. In short, more clicks are required to set up your time dimension from a table, but you have the benefit of getting a table and all the attributes exactly the way you want them.
So, functionally, nothing has really changed much for time dimensions in Katmai apart from renaming of options and some slight interface adjustments. If you base your time dimension on a table, the interface changes make the process to create the time dimension in the Analysis Services database a bit more tedious, in my opinion. Fortunately, these aren’t tasks that you have to repeat every day and, if you want to reproduce the time dimension in another cube, you can always script it out rather than build it through the wizard. –Stacia
While there are several new features slated for Analysis Services that haven’t been released in a CTP yet, the July CTP does include a new dimension wizard. This wizard is intended to simplify your work by streamlining the steps involved to set up a new dimension. Today, as I walk you through the new wizard in SQL Server 2008 (which I’ll henceforth call Katmai throughout this article), I’ll explain how it’s lived up to the promise of an improved design experience and remind you how it’s different from the dimension wizard in SQL Server 2005 (which I’ll refer to as Yukon).
Of course, before you can add a dimension, you need to add a data source and data source view (DSV). Nothing has changed here. If an attribute doesn’t exist in the format you want in the physical data source, you will still need to add a named calculation (or a derived column in a named query) in the DSV before you add the attribute to the dimension. For example, if you have FirstName and LastName columns in a customer dimension table, but want to display “LastName, FirstName”, you’ll need to concatenate the columns in the DSV.
Step 1: What’s the source for your dimension?
Once the DSV is just right, you can kick off the dimension wizard. In Yukon, the first main page of the wizard is “Select Build Method” which gives you two choices for creating a dimension. I use the bottom-up approach most often – that is, I build the dimension using a data source which in turn is associated with a DSV which includes one or more tables for the dimension. Alternatively, there is the top-down approach, or more officially “Build the dimension without using a data source” which lets you describe the design and generate a table schema in your data source. If you leave the Auto build check box selected, then the wizard recommends the key column for the dimension and looks for hierarchies (although my experience with auto-detected hierarchies has been inconsistent). You then click Next to select the DSV and click Next again to specify whether you’re creating a standard dimension, a time dimension based on a table in your DSV, or a server-based time dimension. To recap, not counting the welcome page of the wizard, you go through three pages of the wizard to define the type and source of the dimension you want to build.
In Katmai, the three pages have been consolidated into one page – Select Creation Method – which gives you the following choices:
The first and fourth options are equivalent to the options you have in Yukon. I’ll discuss the time table options in a future blog entry. For now, I’ll continue through the wizard using an existing table.
Step 2: Which table is the main dimension table and which are the key and name columns?
In Yukon, on the Select the Main Dimension Table page of the wizard, you first select the dimension table (or the most granular table in a snowflake schema). Then you select one or more key columns in the table to uniquely identify each dimension member. Optionally, you select a column to represent the member name.
In Katmai, the only change here is that one page – Specify Source Information – combines selection of the DSV and the dimension table. You also specify the Key and Name columns on this page. The interface is slightly different if you want to use a composite key – a drop-down list instead of check boxes. I think this will wind up requiring more mouse movement than the previous interface, so I’m not wild about this last change, but practically speaking I rarely use composite keys in a dimension, so it’s probably a negligible change.
Step 3: Which columns are dimension attributes?
On the Select Dimension Attributes page (the next page in both Yukon and Katmai), a list of all remaining columns displays. In Yukon, you won’t see the key or name columns in this list, but in Katmai the key column is included in the list. In Yukon, all attributes are selected by default (if you kept Auto Build enabled) whereas in Katmai only the key column is selected by default.
In Yukon, this list includes both an Attribute Key Column and an Attribute Name Column, which initially display the same column name but let you assign a different name column right in the wizard. I liked this feature for updating name columns in snowflaked schemas. Unfortunately, it goes away in Katmai. You’ll have to update the name column in the dimension editor directly. Not the end of the world, I suppose, but it’s a feature I use enough to really notice it’s missing.
Katmai adds another feature to this page which I’ll concede compensates for the inability to specify the Attribute Name Column. Specifically, there is an Enable Browsing check box for each attribute. This is a quick and efficient way to set the AttributeHierarchyEnabled property to False, which means the attribute can’t be placed on an axis in a query (i.e., you can’t put it in rows, columns, or the filter). Disabled attributes are useful for things like phone numbers or addresses – you don’t really analyze this information, but your client application can make it available to the end user as a tooltip, for example. On this page, you can also specify the attribute type, although I don’t know too many people who actually use this often for non-time dimensions.
Step 4: What is the dimension name?
The final page in Katmai allows you to name the dimension, and you’re done after going through a grand total of four pages! Before I get to this point in Yukon, I have to specify a dimension type (usually Regular), define a parent-child hierarchy (which should self-detect anyway and which I avoid whenever possible), and work through two pages for hierarchies (detecting and reviewing) before I reach the final page to give the dimension a name. For a standard dimension with auto build enabled in Yukon, you have to go through ten pages. That’s quite a difference, and therefore Katmai considerably streamlines the basic development of a dimension with the new dimension wizard.
There are a few more Analysis Services features in the July CTP that I’ll review in future blog entries. Check back soon! –Stacia
While you can tune the caching for the Lookup to get optimal performance, that cache goes away when the package finishes. So if you have a second package that needs to look up against the same reference table, you have to pay the overhead cost of loading up the cache (if you’re using it) again. That’s where SQL Server 2008 comes to the rescue with “persistent lookups.”
The new Lookup transformation hasn’t been made available yet in a public CTP, so I haven’t had a chance to benchmark the caching, but I think the improvement of persistent lookups shows promise. Here’s my understanding of this feature (which of course is subject to change before SQL Server 2008 goes RTM):
One big change to caching in general is that you can have a cache larger than 4 GB on both 32-bit and 64-bit systems, which should help the scalability of your Lookups.
Let’s explore the idea of a persistent cache further. Obviously, it’s not useful when the reference dataset is highly volatile. But for a relatively stable reference dataset, it’s got possibilities. Essentially, you start the process by populating the cache. You can do this in a separate data flow from the one containing your Lookup transformation. The cache stays in memory to be used by as many Lookups as you like until the package stops executing.
But wait – there’s more! As an alternative, you could populate the cache in its own package and store it in a .caw file. Then that cache file can be used in as many other packages as needed. Here’s the reusability factor that was missing in SQL Server 2005. The .caw file can be read into memory faster than reading in a table, view, or query for a full cache. It’s like the Raw File you can use for asynchronous processing and is optimized for reads by Integration Services. In fact, you can use the Raw File Source to load the cache contents into a data flow, do whatchya gotta do to the data, and land the results someplace else if you use the data for more than just Lookups.
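In miniature, the pattern works like this Python sketch – an analogy only, since the real mechanism is the Cache transformation writing a .caw file, and the file name and reference data here are made up: one package builds the cache and persists it; a second package loads the persisted cache instead of re-querying the reference table.

```python
import os
import pickle
import tempfile

# "Package 1": build the lookup cache from the reference table and persist it
# (standing in for the Cache transformation writing a .caw file).
reference_rows = [(1, "Bikes"), (2, "Accessories"), (3, "Clothing")]
cache = {key: name for key, name in reference_rows}

cache_path = os.path.join(tempfile.gettempdir(), "categories.cache")
with open(cache_path, "wb") as f:
    pickle.dump(cache, f)

# "Package 2": load the persisted cache instead of hitting the database again,
# then use it to resolve keys flowing through the data flow.
with open(cache_path, "rb") as f:
    warm_cache = pickle.load(f)

print(warm_cache[2])  # prints "Accessories"
```

The win is the same as with the .caw file: the expensive step (querying and shaping the reference data) happens once, and every downstream consumer pays only the cost of a fast deserialization.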
Another benefit of storing the cache in a file is the ability to deploy a package to multiple environments and ensure the contents of each cache are identical. Simply add it to the Miscellaneous folder and the package deployment utility will include it in the set of files to be transferred to each target server.
An important change with partial caching is the ability to store rows in the cache that don’t match rows in the data flow. You will even be able to specify how much of the cache you want to devote to this purpose.
Considering a considerable part of the ETL process in data warehousing depends on the Lookup transformation, it’s good to see this part of Integration Services has received attention in the next release of SQL Server. Watch for the new Lookup transformation, Cache transformation, and Cache connection manager in a future CTP. –Stacia
As I was slogging through updating reports for a client recently, I was really wishing I could fast-forward in time, install SQL Server 2008 in my client’s environment, and use the new report designer demonstrated by Jason Carlson (Product Unit Manager of Reporting Services at Microsoft) at PASS 2007 in Denver a few weeks ago. In particular, the 2000/2005 report designer interface isn’t very friendly when you want to work with matrix subtotal properties. When I teach Reporting Services classes, I often ask students to come up with a name for the green triangle that is the single point of entry to the Subtotal properties. No one has yet come up with anything better than “that green thingie…”. Somehow that strikes me as so much more amusing than the more matter-of-fact “green triangle.” (But then I’m easily amused…) I’ve been creating reports in Reporting Services for at least 4 years now and still have yet to master the precise click-motion required to nail that green thingie the first time. Compound that with trying to accomplish this feat over a Remote Desktop Connection and I was quickly frustrated before I had finished updating the first report! Only twenty more to go…sigh.
While Katmai won’t help me with my current problem, I am delighted to see that it will resolve one of the most frustrating aspects of working with the matrix data region. I haven’t done an official count, but I do believe I use a matrix much more often than a table, so my encounters with the green thingie are more numerous than I would like. I suspect many other people are frequent users of the matrix and therefore feel my pain. Fortunately, our collective frustration goes away as soon as Katmai releases (and we can convince everyone to make the leap right away – I AM an optimist after all!).
In fact, not only does the matrix improve, the whole report designer interface changes. The version that Jason showed at PASS was a separate client tool – outside of Visual Studio, that is – which he said will ultimately be merged with Report Builder (but don’t expect that to happen in the Katmai release). The beautiful thing about the report designer is that it’s less intimidating to non-developer types than the Visual Studio report designer interface. The Properties window is still there for the hard-core folks. For everyone else, a right-click will get you what you need. There’s also definitely an Office 2007 flavor to the new designer, including ribbons. Business users who are responsible for report development will LOVE this tool.
I have to say it all looks pretty, but my personal favorite is the disappearance of the green thingie. Nothing personal, but good riddance. Subtotal areas in a matrix will now have a place right alongside every other object you place in the designer. And you can use subtotal areas a lot more flexibly, too, because a matrix is no longer really a matrix. Now it’s a tablix. (Is that tay-blicks with a long a or tab-blicks with a short a…..?) More about tablix in a future blog entry. Watch for the new report designer in a future CTP release. –Stacia