This post is an example of the "bring your own problem" approach in the classroom, inspired by the upcoming SQLskills Immersion Events in August. See the previous posting for the reason behind the blog post title.
The SQL Server "MAX" data types (VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX)) have been around for some time now (since SQL Server 2005). And the data types they are meant to replace (TEXT, NTEXT, IMAGE) have been deprecated for just as long. The old data types somehow made their way into SQL Azure (sorry, Windows Azure SQL Database now); I was kind of shocked about that. It's a best practice to use the new data types.
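As a sketch of what that migration looks like, here's a minimal helper that maps each deprecated type to its MAX replacement and emits the corresponding ALTER COLUMN statement. The table and column names are made-up placeholders, and real migrations need more care (indexes, full-text catalogs, replication), so treat this as an illustration only:

```python
# Mapping from the deprecated LOB types to their MAX replacements.
DEPRECATED_TO_MAX = {
    "TEXT": "VARCHAR(MAX)",
    "NTEXT": "NVARCHAR(MAX)",
    "IMAGE": "VARBINARY(MAX)",
}

def migration_statement(table, column, old_type):
    """Build the T-SQL ALTER COLUMN statement for one deprecated column."""
    new_type = DEPRECATED_TO_MAX[old_type]
    return f"ALTER TABLE {table} ALTER COLUMN {column} {new_type};"

# Example (placeholder names):
print(migration_statement("dbo.Docs", "Photo", "IMAGE"))
# -> ALTER TABLE dbo.Docs ALTER COLUMN Photo VARBINARY(MAX);
```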
A student in one of my classes once reported, at the "best practice" point in the proceedings, that changing the database from IMAGE to VARBINARY(MAX) had broken an old application. He was using (and wanted to continue using) the new ODBC driver. Was there any way to use the new data types with old apps? He wasn't using the obvious culprits (READTEXT and friends) in the application.
The problem ended up being with the application's use of column metadata. The application was using that metadata to allocate a buffer based on the maximum size of the data type. IMAGE had reported this maximum size as 2GB; VARBINARY(MAX) was reporting -1. The reason for the -1 is that, in SQL Server 2008, a VARBINARY(MAX) column with the FILESTREAM attribute can be larger than the usual 2GB limit. The drivers actually report -1 going back to SQL Server 2005; I guess they were planning for the future even then.
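Here's a sketch (the names are mine, not the app's) of the kind of metadata-driven allocation that breaks: a naive `bytearray(reported_max)` works when IMAGE reports 2GB but blows up when the MAX type reports -1, so the -1 case has to be treated as "size unknown, read in chunks":

```python
SQL_UNLIMITED = -1        # what the driver reports for the MAX data types
SQL_2GB = 0x7FFFFFFF      # the maximum size IMAGE used to report

def buffer_size(reported_max, chunk=65536):
    """Choose an allocation size from the column's reported maximum length.

    A naive bytearray(reported_max) raises on -1, so treat -1 as
    "unlimited" and fall back to reading the value in fixed-size chunks
    (and cap the allocation rather than grabbing 2GB up front).
    """
    if reported_max == SQL_UNLIMITED:
        return chunk                      # stream the value chunk by chunk
    return min(reported_max, chunk)       # never allocate the full 2GB
```

The point isn't this particular cap, it's that any code sizing buffers directly from column metadata must now expect -1 for the MAX types.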
Fortunately, there is a way to have the driver use the SQL Server 2000 type system and return 2GB as the maximum size for the MAX data types. In ODBC, this setting is the connection string parameter "DataTypeCompatibility=80"; in ADO.NET it's "Type System Version=SQL Server 2000". In answering a question on a mailing list recently, I was surprised to find that the latest JDBC driver does appear to lack such a setting.
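For concreteness, here's what those two connection strings look like, built as plain strings (server, database, and driver names are placeholders; only the compatibility keywords come from the settings above):

```python
# ODBC: add DataTypeCompatibility=80 to get the SQL Server 2000 type system.
odbc_conn_str = (
    "Driver={SQL Server Native Client 11.0};"   # placeholder driver name
    "Server=myServer;Database=myDb;Trusted_Connection=yes;"
    "DataTypeCompatibility=80;"
)

# ADO.NET (SqlClient): the equivalent is Type System Version.
adonet_conn_str = (
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;"
    "Type System Version=SQL Server 2000;"
)
```

With either setting in place, the MAX types report 2GB as their maximum size again, and metadata-driven code written against the old types keeps working.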
Does everybody get that?
Looking forward to seeing you in August…