Disk space is cheap…

THAT'S NOT THE POINT!!!

The simple point is that bad (or sloppy/lazy) design cannot be tuned. If you think that data type choices, nullability, and keys don't really matter, then you won't scale. It's possible that you may completely fail because of this. Have you ever heard (or possibly said?), "let's just get this done – we'll worry about performance later"? If you haven't heard it, I'm surprised! I hear this all the time…

Yesterday I gave a lecture at SQLPASS about GUIDs. It wasn't the most well attended (under 200 people) but I suspect that's because of two things: first, our good friend Bob Ward was speaking at the same time (and there were actually a bunch of really good sessions!) AND the simple fact that GUIDs aren't sexy (I agree!). Also, I think that a few folks may have presumed that what I was going to talk about (maybe even solely talk about?) was fragmentation. And, while fragmentation is the most outwardly visible problem with GUIDs – it's by NO MEANS the only problem. And, so I thought I'd blog a few things to think about/remember when trying to design/architect a SQL Server system. Clearly there's got to be a balance between the amount of time you're going to spend on design vs. just "getting it done" but that doesn't mean that NOTHING MATTERS or that you can just do anything with a table and "SQL Server will just 'handle' it." OK, I take that back – SQL Server won't have a choice other than to "just handle it" but some things it just CANNOT handle well. Performance and scalability will suffer and, again, your application may fail.

One of the resounding principles of my session (and most of my design talks in general ;-) is that design matters. In fact, in my summary, I said that 3 things really matter in terms of good SQL Server database design/architecture:

  1. Know your data – this helps you make the right decisions in terms of data types/nullability, and knowing your data's churn helps with long-term maintenance goals (and initial maintenance plans) 
  2. Know your workload – this helps you make the right decisions about locking/isolation, optimizing procedural code and indexing strategies (and these are the KEY to a FAST and scalable system)
  3. Know how SQL Server works – this is the one that's often overlooked. And, without information such as "the primary key is enforced by a clustered index and the clustering key is added to ALL nonclustered indexes," you may inadvertently create a database that grows faster and larger than anticipated, where performance slows to a crawl and even management/maintenance becomes a [really HUGE] challenge.

So, while I could go on for ages here I just want to expand on that last point: Know how SQL Server works. Specifically, I want to tie the PK -> CL Key -> NC Indexes chain together with the "disk space is cheap" mantra that I also hear ALL THE TIME.

OK – so let's break this down a bit… No matter what your clustered index is – it should be narrow. I do not choose my clustered index for range queries and my choice for the clustering key is NEVER accidental.

Why? Because it has a HUGE impact on overall performance. To prove the point (and this was the concept around which my session focused), I created 3 different versions of the SAME "Sales" database. I wanted to show ALL of the impacts of a poorly chosen key – both as the CL key and simply as a size issue. It's only 12 more bytes than an int, right? What harm can it cause… just wait!

So – to start, I loaded all three databases with roughly 6.7 million rows… and, I made sure everything was clean and contiguous so that I'd have the same starting point for all of the tables. I actually strategically started things in one filegroup and then moved things over to another filegroup with 2 files so that I could get some benefits from having multiple files as well (see Paul's excellent post on why a RW filegroup should generally have 2-4 files here: Benchmarking: do multiple data files make a difference?). So, at the initial start I have three databases:

SalesDBInts (initial size with Sales at 6.7 million rows = 334MB):

  • Customers – has an ever-increasing identity (int) PK (4 bytes)
  • Employees – has an ever-increasing identity (int) PK (4 bytes)
  • Products – has an ever-increasing identity (int) PK  (4 bytes)
  • Sales – has an ever-increasing identity (int) PK and FKs to Customers, Employees and Products (row size = 27 bytes)

SalesDBGUIDs (initial size with Sales at 6.7 million rows = 1000MB):

  • Customers – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Employees – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Products – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Sales – has a randomly generated (using the NEWID() function) GUID PK (16 bytes) and FKs to Customers, Employees and Products (row size 75 bytes)

SalesDBSeqGUIDs (initial size with Sales at 6.7 million rows = 961MB):

  • Customers – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Employees – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Products – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Sales – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes) and FKs to Customers, Employees and Products (row size 75 bytes)
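
To make the setup more concrete, here's a minimal sketch of what the random-GUID flavor of the Sales table might look like. The exact demo script isn't included in this post, so the column and constraint names below are illustrative; it assumes the Customers, Employees and Products tables already exist with matching key types. The int version simply uses int IDENTITY instead, and the sequential GUID version only changes the default to NEWSEQUENTIALID().

-- Illustrative sketch only (not the actual demo script)
CREATE TABLE dbo.Sales
(
    SalesID      uniqueidentifier NOT NULL
                     CONSTRAINT SalesIDDefault DEFAULT NEWID()
                     CONSTRAINT SalesPK PRIMARY KEY CLUSTERED,
    CustomerID   uniqueidentifier NOT NULL
                     CONSTRAINT SalesToCustomersFK REFERENCES dbo.Customers (CustomerID),
    EmployeeID   uniqueidentifier NOT NULL
                     CONSTRAINT SalesToEmployeesFK REFERENCES dbo.Employees (EmployeeID),
    ProductID    uniqueidentifier NOT NULL
                     CONSTRAINT SalesToProductsFK REFERENCES dbo.Products (ProductID),
    SalesAmount  money NOT NULL
);

-- The FK columns are indexed (as FK columns generally should be)
CREATE NONCLUSTERED INDEX IX_SalesToProductsFK ON dbo.Sales (ProductID);

The row sizes above also show where the "it's only 12 more bytes" argument falls apart: every one of the four key columns widens from 4 to 16 bytes, so a Sales row grows from 27 to 75 bytes. Over roughly 6.7 million rows that's on the order of 300MB of extra space in the clustered index alone; add the wider FK columns and the 16-byte clustering key being carried in every nonclustered index row, and you've explained most of the jump from 334MB to roughly 1GB before a single fragmenting insert has happened.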

OK, so here's where the session really starts… I run 10K inserts into the Sales table in each database and then I check and see what happens (a rough sketch of this kind of insert loop follows the timings):

  • 10K rows in SalesDBInts takes 00:17 seconds
  • 10K rows in SalesDBGUIDs takes 05:07 minutes
  • 10K rows in SalesDBSeqGUIDs takes 01:13 minutes
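
The exact insert script isn't included here, but if you want to run something similar, the sketch below gives the general shape. The SalesAmount column and the idea of reusing one set of FK values are assumptions just to keep the example self-contained; in the GUID databases the three FK variables would be uniqueidentifier rather than int, and the SalesID value comes from the table's IDENTITY / NEWID() / NEWSEQUENTIALID() default rather than from the INSERT itself.

-- Illustrative sketch: time 10,000 singleton inserts into dbo.Sales
SET NOCOUNT ON;

DECLARE @CustomerID int, @EmployeeID int, @ProductID int, @start datetime, @i int;

SELECT TOP (1) @CustomerID = CustomerID FROM dbo.Customers;
SELECT TOP (1) @EmployeeID = EmployeeID FROM dbo.Employees;
SELECT TOP (1) @ProductID  = ProductID  FROM dbo.Products;
SELECT @start = GETDATE(), @i = 0;

WHILE @i < 10000
BEGIN
    INSERT dbo.Sales (CustomerID, EmployeeID, ProductID, SalesAmount)
    VALUES (@CustomerID, @EmployeeID, @ProductID, 10.00);
    SET @i = @i + 1;
END;

SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms;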

Those timings are already SCARY and should go down in the "Are you kidding me?" category, but I also have to add that the hardware and setup for these first few tests just highlight a whole myriad of problems. First, I was running with a somewhat crummy setup – a dual-core laptop with only 3GB of memory, and the database was on an external USB drive. Certainly not enterprise storage, but not an enterprise-sized database either. For the size of the db we should have been able to do better… wait, we did – with the int-based database things went really well. Only the other two really stunk, and the sequential GUID based database definitely fared better than the random one (of course – fragmentation, right?). And, yes, that's a part of it… but there's more. And, I thought… no, this can't be right. Let me try again… run 2:

  • 10K more rows in SalesDBInts takes 00:24 seconds
  • 10K more rows in SalesDBGUIDs takes 07:21 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:10 minutes

Well, that seems pretty consistent but wow – the random GUID db is really NOT faring very well… let's try it again:

  • 10K more rows in SalesDBInts takes 00:26 seconds
  • 10K more rows in SalesDBGUIDs takes 10:10 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:12 minutes

OK, so you have GOT to be wondering why things are going so horribly wrong? The fragmentation is leading to more page IOs, and those pages also have to be put in cache, so we need a larger and larger cache to handle our GUID database… none of this is good, and it means you need bigger machines and/or something else to help you out. With the ever-increasing patterns created by the other databases we're requiring fewer pages to be read and fewer pages to be cached – those databases are performing somewhat consistently…
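
I didn't include the diagnostic queries in the post, but if you want to watch this happen in your own tests, a simple check is to compare page counts and fragmentation across the three databases (run in each database; LIMITED mode keeps the scan cheap):

-- How big and how fragmented is each index right now?
SELECT  OBJECT_NAME(ips.[object_id])       AS table_name,
        i.name                             AS index_name,
        ips.index_type_desc,
        ips.page_count,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.[object_id] = ips.[object_id]
        AND i.index_id    = ips.index_id
ORDER BY ips.page_count DESC;

You should see the random GUID database's page counts and fragmentation climb quickly with each batch of inserts, while the int and sequential GUID databases stay comparatively flat.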

OK – so what can we do… let's try FIRST dealing with the fragmentation. To keep it simple, I went to the Books Online for sys.dm_db_index_physical_stats – example D. Using sys.dm_db_index_physical_stats in a script to rebuild or reorganize indexes. This is pretty good, but since these databases had never seen a REBUILD (and definitely not a FILLFACTOR setting), I had to tweak the script slightly to include a generic 90% fillfactor. Here's the line that I modified:

SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD WITH (FILLFACTOR = 90)'
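
For context, here's a heavily simplified sketch of the kind of loop that line lives in. The actual Books Online example D also chooses between REORGANIZE and REBUILD based on the fragmentation level (and handles partitions), so treat this as an outline rather than the exact script I ran:

-- Simplified outline: rebuild every index over 10% fragmented with FILLFACTOR = 90
DECLARE @objectname sysname, @schemaname sysname, @indexname sysname, @command nvarchar(4000);

DECLARE index_cursor CURSOR FOR
    SELECT  QUOTENAME(o.name), QUOTENAME(s.name), QUOTENAME(i.name)
    FROM    sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.objects AS o ON o.[object_id] = ips.[object_id]
    JOIN    sys.schemas AS s ON s.[schema_id] = o.[schema_id]
    JOIN    sys.indexes AS i ON i.[object_id] = ips.[object_id] AND i.index_id = ips.index_id
    WHERE   ips.avg_fragmentation_in_percent > 10.0
    AND     ips.index_id > 0;   -- skip heaps

OPEN index_cursor;
FETCH NEXT FROM index_cursor INTO @objectname, @schemaname, @indexname;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the line modified from the Books Online example:
    SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD WITH (FILLFACTOR = 90)';
    EXEC sp_executesql @command;
    PRINT N'Executed: ' + @command;
    FETCH NEXT FROM index_cursor INTO @objectname, @schemaname, @indexname;
END;
CLOSE index_cursor;
DEALLOCATE index_cursor;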

I ran this in ALL three databases but there wasn't much to do in any of them except for the GUID-based database:

SalesDBInts (5 seconds)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)

SalesDBGUIDs (7:51 minutes)
Executed: ALTER INDEX [SalesPK] ON [dbo].[Sales] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [IX_SalesToProductsFK] ON [dbo].[Sales] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [CustomersPK] ON [dbo].[Customers] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)

SalesDBSeqGUIDs (9 seconds)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [CustomersPK] ON [dbo].[Customers] REBUILD WITH (FILLFACTOR = 90)

Then, I ran my inserts again…

  • 10K more rows in SalesDBInts takes 00:25 seconds
  • 10K more rows in SalesDBGUIDs takes 05:05 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:07 minutes

OK, that's better… and, it will be more consistent because of the reduction in splits, which also helps to keep the table smaller and therefore requires both less disk space and less cache. OK, but there's still absolutely NO comparison between the sequential and the random, eh? Maybe we need more disk space, faster drives and/or more memory… let's try killing our problem with iron (yes, an oldie but a goodie!). And, yes, this is going to help…

The first hardware change I made was to move these databases to an internal SSD drive… and then I ran my test again. Let's get rid of all the random IO problems. That's got to help, eh?

  • 10K more rows in SalesDBInts takes 00:04 seconds
  • 10K more rows in SalesDBGUIDs takes 01:15 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:02 minutes

WOW – THAT's AWESOME… killing it with iron brings the random GUID database VERY close to the speed of the sequential GUIDs, as we're largely eliminating the penalty of the random IOs. This makes our backups faster, etc., but it still doesn't reduce the memory required, because of all the pages that have to be touched on insert. And, if you have a very large table with a lot of historical data that wouldn't have otherwise needed to be brought into cache, this is a BIG problem – especially for much larger tables.

I had quite a bit more in my demos, but it brings us to a really good point… what are our options and what should we consider? First and foremost, how much control do you have? Did you design/architect your system and if so – how much work are you willing to put into it from here? Or, is this an application over which you have no control? Let's take the obvious…

If this is an application over which you have no control then you really have only 2 options:

  1. MAINTENANCE (with a good FILLFACTOR)
  2. Consider killing it with iron where the most obvious gains are going to be disk IOs (i.e. SSDs for the data portion) and memory…

If this is a system over which you do have control… then, I'd suggest changing the CL key at a minimum. Then, I'd make sure you have good maintenance set up for your nonclustered indexes because those will most certainly be fragmented. Then, I'd slowly consider changing over your FKs to use the CL key (identity ideally) and then maybe – eventually – you can remove those GUIDs altogether. But, this is NOT an easy thing to do…

If your CL key is a PK then here are your steps (a rough T-SQL sketch follows the list):

  1. Take the database offline (sorry, I’m just the messenger!!)
  2. Disable the FKs
  3. Disable the nonclustered indexes
  4. Drop the clustered PK (alter table)
  5. Optionally, add an identity column?
  6. Create the new clustered index
  7. Create the PK as nonclustered
  8. Enable the nonclustered indexes (alter index…rebuild)
  9. Enable the FKs with CHECK (this is very important)
  10. Bring the database online
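
To make those steps a little more concrete, here's a rough sketch against the demo's Sales table. Every object name here is an assumption (I'm calling the GUID column SalesGUID and the new identity column SalesID), only one FK and one nonclustered index are shown, and "offline" is approximated with SINGLE_USER. Note that if other tables have FKs referencing the PK you're dropping, those FKs have to be dropped and recreated rather than just disabled. Test on a restored copy first!

-- Illustrative sketch only; names and details are assumptions
ALTER DATABASE SalesDBGUIDs SET SINGLE_USER WITH ROLLBACK IMMEDIATE;   -- 1. "offline" to users

ALTER TABLE dbo.Sales NOCHECK CONSTRAINT SalesToCustomersFK;           -- 2. disable the FKs (repeat per FK)
ALTER INDEX IX_SalesToProductsFK ON dbo.Sales DISABLE;                 -- 3. disable the nonclustered indexes
                                                                       --    (so dropping the CL index doesn't rebuild them)
ALTER TABLE dbo.Sales DROP CONSTRAINT SalesPK;                         -- 4. drop the clustered PK
ALTER TABLE dbo.Sales ADD SalesID int IDENTITY(1,1) NOT NULL;          -- 5. optionally, add an identity column

CREATE UNIQUE CLUSTERED INDEX SalesCL ON dbo.Sales (SalesID);          -- 6. create the new (narrow) clustered index
ALTER TABLE dbo.Sales ADD CONSTRAINT SalesPK
    PRIMARY KEY NONCLUSTERED (SalesGUID);                              -- 7. recreate the PK as nonclustered

ALTER INDEX IX_SalesToProductsFK ON dbo.Sales REBUILD;                 -- 8. enable the nonclustered indexes
ALTER TABLE dbo.Sales WITH CHECK CHECK CONSTRAINT SalesToCustomersFK;  -- 9. enable the FKs WITH CHECK (keeps them trusted)

ALTER DATABASE SalesDBGUIDs SET MULTI_USER;                            -- 10. bring the database back online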

And, there are certainly derivatives of this, but the long story short is that it's going to be a painful process. And, I know some of you have work to do… so, I'll end this post here as well as give you a few links to help you continue learning about these issues. The more you know about SQL Server, the better your database design and the longer your database will be healthy and scalable.

I've spoken about this in many posts and many of our SQLMag Q&A posts.

Can I actually end this post with – enjoy! (remember, I'm just the messenger!! :)
kt

24 thoughts on "Disk space is cheap…"

  1. Kimberly,

    Nice post. I need to pass this to all of the devs here.

    What method do you and Paul use to do mass inserts quickly? Is it as simple as INSERT … GO 10000?

    Thanks,
    Eric

  2. Design is important. I say things like "disk space is cheap" a lot, but that’s because it is. On the flip side, disk performance is not cheap – a poorly designed app will cost you a lot of time and money in performance tuning and hardware purchases.

  3. Kimberly,

    What a great post!

    In some environments, having GUID PK’s can be handy.

    For those cases, we use your last suggestion of having an Int/Identity as the CLKey, which forms part of the physical database model (i.e., not the logical model) with the PK as a GUID.

    Whilst not as bad as having a GUID/Seq Key as the PK and CL Key, in your opinion, how much worse is this than having the CL Key and PK as an Int/Identity?

    Thanks,
    Dave

  4. Hey there David – The good news is that at least your tables won’t be horribly fragmented AND your nonclustered index overhead will be reduced. So, that’s probably best. The only negative with a GUID PK is the size of the FKs in the related tables. Sometimes it’s better to use the PK (nonclustered) for the referential integrity even if your main table (and point of entry, per se) is keyed with a GUID.

    But, that’s a MUCH better place than *many*!!

    Cheers,
    kt

  5. I am not a fan of GUIDs, but this doesn’t look like a fair test.
    1. The invention of sequential GUIDS should never have happened. Your examples admirably showcase the use of int identity to get fast performance from a sequential CL index. If you want sequential inserts I totally agree with using int over GUID.
    2. The point of GUIDs is to allow multiple spids to request an insert at the same time without generating a hot spot. Your 10K insert process sounds like a single-spid execution, a totally invalid test. Having 100 spids hammering at the same table at the same time would be closer to simulating a real world situation. It would be difficult to do this in SSMS, so cut the test down to 10 query windows, delay each query to start at a specific time of day, then loop 1000 times inserting 1 record per loop.
    3. I agree, knowing the SQL process and how it impacts your design is very important, so ignoring the design of GUIDs and starting with 90% fill factor is totally sabotaging good GUID design. (Yes, I believe it is possible to have good GUID design. It is designed to work in sparse table pages.) Having greater than 60% fill factor is asking to get your performance killed by creating substantial page splits throughout your GUID table test, while a sequential CL is designed to work well at 90% fill factor.
    4. You have a typo in your example. I don’t remember the function you execute to get sequential GUIDs, but it is definitely NOT NEWID().

  6. Hey Kimberly..

    that’s weird, but I can’t reproduce the same results as yours.. I tried to create a test environment, but with only 1 million rows, instead of 6.7M.. I can clearly see that my clustered index in the Int DB has 5548 pages and the sequential guid has 11941 pages.. but when trying to insert 10K rows I can see only an 18% increase in the insert time.. 14 seconds in the Int DB against 16 seconds in the sequential guid DB..
    in the random GUID everything is behaving "as expected", with 3 minutes for the insert, 1350% more than the Int DB..
    the sequential guid doesn’t introduce the fragmentation that the random guid introduces, so I can’t understand where you’re losing so much time in your tests, all this time wasted only allocating the extra pages?
    can you describe better the DB schema/indexes of your tests?

    cheers,

    Felipe Ferreira – @SQLBoy

  7. Hey there Felipe – Yes, I didn’t post every aspect of my environment here because I was just trying to give a summary. Everything about the hardware was the same in every test but what your test might be missing are the appropriate secondary indexes? I had a clustered plus two nonclustered. I explained a lot more about this during the session… And, I used different types of hardware for various stages of the tests, etc.

    But, yes, everything was the same for every level of the test. Different hardware will definitely result in different numbers…

    Cheers!
    kt

  8. Hey there Ken – Answers by number.

    1) There are situations where people insist that they MUST use a GUID for those, there’s the newsequentialID function.
    2) Actually, NO that’s not the point. And, hot spots are NOT bad. They used to be bad when SQL Server had page level locking but having a hot spot is actually GOOD now in that the pages that you need will already be in cache and the amount of cache that you’ll need will be significantly reduced. The "hot spots are bad" logic and the idea of reducing hot spots is an old concept and doesn’t apply to SQL Server 7.0 and higher.
    3) I completely and totally disagree. And, in fact, I know this for certain. In all of my clients where they are forced to deal with GUIDs in applications that they cannot change – the resounding and ONLY successful way to reduce the performance problems caused by it is to use FILLFACTOR.
    4) Yes, that was a cut/paste problem. It’s newsequentialid(). Thanks.

    Cheers,
    kt

  9. I just came across this article as I was about to write one in favor of Guids and I had to smile. The comparison of an int and Guid in this particular scenario really isn’t comparing apples to apples. I say this for a couple of reasons. First of all, let me use a non-database example. If you were going to build a house out of wood, you would build it one way. If you were going to build a house out of concrete, however, you would have a totally different way to create your house. In both cases you will end up with a house, but the process will be absolutely different.

    When you build a database and you use Guids, you have to change some of the underlying ways that you manipulate your database. In other words, you have to optimize some things and change some things. Not optimizing your setup for guids is the real problem. It is the difference between building a wood house and building a concrete house – your concerns as you start creating your home are completely different. So in order to compare apples to apples, you need to optimize your database for large scale guid use and THEN make a comparison over a RANGE of actions – because in the real world, you have a range of things going on. Ints can turn into a huge nightmare if you don’t know what you are doing and if they are not right for your setup AND so can Guids. The KEY is to make sure you understand WHAT YOU NEED TO DO to use Guids within your system.

  10. Hey there Kila – I get your point and your conclusion is right – that you have to have a REAL reason to use GUIDs. And, I’ve said this before, I don’t have a problem with GUIDs in general but I have real problems with folks that just choose GUIDs as their PK for all of their tables because they don’t want to deal with the client code.

    And, another problem is that there really isn’t a good way to optimize GUIDs except for what I’ve already said – do NOT make them the clustering key; make sure that they’re nonclustered, and then consider using them as a natural key.

    The long story short is that they are VERY RARELY used correctly and in almost every case I’ve EVER seen them they have been abused. And, I could have come up with better ways to have done what they are doing that would have performed A LOT better even in distributed environments.

    Cheers!
    kt

  11. Hey Kimberly, thanks for your reply.

    Can you please explain what you mean by "Sometimes it’s better to use the PK (nonclustered) for the referential integrity even if your main table (and point of entry, per se) is keyed with a GUID."

    Are you saying that the PK and FK’s would be int’s (from a physical model perspective), however from the logical model perspective, we still "access" the tables from application code via their GUIDs?

    On a related note, when using a system built on GUID PK’s & clustering on an int/identity: an OO-based app will do a lot of retrieval of single rows to populate a single instance of an object. To do this, the app is going to do the equivalent of a SELECT * FROM table (obviously specifying the column names explicitly). If the clustering key is not the PK, then you either need an index covering the entire table (or use include columns for everything) or you need to take the bookmark lookup hit. Is there another option?

    In general, the entire subject of strategies to deal with systems built on GUID PK’s is one that isn’t covered much by the experts. Most of the advice is simply "don’t do it", which does not really help with existing systems :)

    Thanks again,
    Dave

  12. Hi Kim,

    The app I’m currently working on has every primary and foreign key as a sequential GUID. There’s also an identity column on each table, but it’s not used in the client code at all. All queries in there reference the GUIDs. While I understand that clustering on the identity column would reduce the size of my indexes/DB and thus reduce IO, could you expand on having a clustering key that is never used in your queries? My understanding is that the clustered index would never be used outside of bookmark lookups, which I’d want to avoid in the first place by creating covering nonclustered indexes (which will most likely take up the space I saved by going from a guid to an int clustering key). Of course there is the tedious option of refactoring the client code to use the identity column in all the joins, wheres, order bys and whatnot, but that’s a totally different ball game.

    cheers,

    Tof

  13. For those of you who want to reproduce my results – this is difficult unless you have similar hardware for each level of testing/setup. But, I’ll give you better insight here. I had 4 tables – customers, products, sales and employees. Sales had a foreign key to each and each fk column was indexed (as they generally should be). Each table in all 3 databases was created the same way in that EACH table used either an int, GUID or sequential GUID as their PK (and therefore the FK and therefore the indexes). The compressed backups of these databases are 60MB (ints), 100MB (SequentialGUIDs) and 600MB (random GUIDs – this is so much larger because of both fragmentation and lower compression because of the random GUIDs). I started each database out with 6.17 million rows and with the clustered indexes rebuilt with the default fillfactor of 100% (which was part of the point of the post) and for the first couple of tests I ran on an external USB drive and crummy hardware (that was improved later in the tests and the posts). And, I demo’ed this on stage during my session.

    Anyway, I can reproduce all of these numbers repeatedly. I’m sorry that you can’t. And, even more importantly I’ve seen these results OVER AND OVER AND OVER again at my clients. I’m sorry that people WANT to use GUIDs and I get that there are places where they do make sense BUT when they’re used as a key for every table just because people don’t want to deal with any other type of key – that’s a problem and it shows itself in MANY MANY ways…

    I’ll work to address some of the other questions in a few mins…

    Cheers!
    kt

  14. @David – Please don’t confuse indexing strategies and optimizing range-based queries with optimizing the clustering key. Making your queries efficient and reducing bookmark lookups is ABSOLUTELY a goal in tuning. However, inefficient clustering keys bloat nonclustereds and so that can have a profound effect on performance.

    As for the schema/model. Imagine that the app regularly accesses a table with a GUID (which is unique and therefore a singleton lookup) – that table can easily handle the lookup with the nonclustered index. However, for all of the related tables, instead of using the PK they use the identity and all of the joins are based on that identity instead of the GUID. Especially when the users wouldn’t need to know those key values or ever directly access them… using the ints has benefits as it’s smaller, could be the clustered key – could be the FKs and all of the indexes would be more efficient and the tables would be less fragmented.
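
    To picture the shape I'm describing, here's a purely hypothetical sketch (all names made up): the GUID stays as a nonclustered PK for the app's singleton lookups, while a narrow identity is the clustering key and the column that the FKs and joins use.

    -- Hypothetical sketch: GUID for app lookups, identity for clustering/joins/FKs
    CREATE TABLE dbo.Customers
    (
        CustomerID    int IDENTITY(1,1) NOT NULL,    -- clustering key; what FKs and joins use
        CustomerGUID  uniqueidentifier NOT NULL
            CONSTRAINT CustomersGUIDDefault DEFAULT NEWSEQUENTIALID()
            CONSTRAINT CustomersPK PRIMARY KEY NONCLUSTERED,
        CustomerName  nvarchar(100) NOT NULL
    );
    CREATE UNIQUE CLUSTERED INDEX CustomersCL ON dbo.Customers (CustomerID);

    -- Related tables reference the narrow int, not the GUID
    CREATE TABLE dbo.Orders
    (
        OrderID     int IDENTITY(1,1) NOT NULL
            CONSTRAINT OrdersPK PRIMARY KEY CLUSTERED,
        CustomerID  int NOT NULL
            CONSTRAINT OrdersToCustomersFK REFERENCES dbo.Customers (CustomerID),
        OrderDate   datetime NOT NULL
    );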

    Hope this helps!!
    kt

  16. Hi Kimberly.
    Since this blog topic dealt with disk storage, I thought of posting my question here. From a SQL Server performance point of view, does it make any difference if a variable-length character column such as NVARCHAR is defined very wide? For instance, let’s assume I have a table containing an NVARCHAR(50) column. However, the column never contains more than 10 characters of data in any row in the table. Would SQL Server perform any better if I were to choose NVARCHAR(10) as the data type for the column instead?

    Thanks.
    – Nikhil.

    1. Hey there Nikhil – No, the internal storage for an NVARCHAR(10) and an NVARCHAR(50) (or an NVARCHAR(4000) for that matter) is the same. These columns only consume the amount of space required by the value actually stored. I would argue that the only issue between them is data integrity (and correctly limiting the columns based on what your data needs).
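
      A quick, illustrative way to see it for yourself:

      -- Both variables store 10 characters (20 bytes), regardless of the declared maximum
      DECLARE @narrow nvarchar(10), @wide nvarchar(4000);
      SET @narrow = N'0123456789';
      SET @wide   = N'0123456789';
      SELECT DATALENGTH(@narrow) AS narrow_bytes, DATALENGTH(@wide) AS wide_bytes;   -- both return 20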

      Hope that helps!
      Kimberly

  17. Hi Kim,

    Fantastic investigation regarding the cost of using Guids when not needed.

    You stated <>

    In the case of keeping SQL Server and mobile phone clients’ datastores synchronized, what would you have done that was a lot better?

    Would you have suggested one of the following, or do you have another solution altogether?
    * http://blogs.msdn.com/b/sqlazure/archive/2010/07/15/10038656.aspx?Redirected=true

    Of the ones presented by Wayne Walter Berry I find it hard to find a clear-cut winner:

    * from the above, using SQL Server sequential identities looks acceptable if a) replication is used b) RFC 4122 compliance is not required c) being twice as slow as an int is ok (it generally is) d) one could find code for generating replacement, non-clashing sequential ids on a non-Microsoft platform that are highly improbable to clash (I haven’t found that).
    * Composite MachineId/TableId keys appears doable…but hugely impractical.
    * Pregenerated Keys/Central store would end up impractical for mobile development.
    * using bigint and offset identities per db sounds the most natural to implement — until the price came in as not being able to use core db replication/Data Sync Service.
    * Did you have other options?

    Regards — and thanks for investing the time for a great summary of the impact of unnecessary use of Guids.

    1. Yes. Actually, I like the “double column Primary Key” that the article suggests and that you list as composite machineid/tableid. This is generally what I prefer. However, I’m not sure why you think it’s impractical? I think it’s because you have so many environments where you’re generating these? And, in the end, the machine id would be the SERVER IFF you were inserting into multiple servers on the backend. I wouldn’t use the client’s machine id.

      The idea is really this – you want to scale out and/or support P2P and you have “multiple masters”; instead of using a guid you have:
      Server 1, row id 1
      Server 1, row id 2
      Server 1, row id 3
      etc.
      Server 2, row id 1
      Server 2, row id 2
      etc.

      If you ever add a server, you’re fine, etc. and the number of “buckets” is manageable.
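
      A hypothetical illustration of that shape (names made up; each server would get its ServerID from wherever you store its identity, e.g. a config table or a per-server default):

      -- Two-column key: which "master" created the row + a local, ever-increasing id
      CREATE TABLE dbo.SyncedOrders
      (
          ServerID   smallint NOT NULL,                -- 1, 2, 3... per participating server
          OrderID    int      IDENTITY(1,1) NOT NULL,  -- ever-increasing within each server
          OrderDate  datetime NOT NULL,
          CONSTRAINT SyncedOrdersPK PRIMARY KEY CLUSTERED (ServerID, OrderID)
      );

      The 6-byte key stays narrow, and each server creates just one "bucket" of insertion points: (1,1), (1,2), (1,3)… on server 1, (2,1), (2,2)… on server 2, and so on.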

      hth,
      kt

  18. Oops.

    The quoted part that was removed from my comment was

    ” I could have come up with better ways to have done what they are doing that would have performed A LOT better even in distributed environments”

    Regards.

  19. Hello Kimberly.

    I think this is a great article and thanks for the insight.

    Do you think the test would be improved by using a bigint versus an int?

    Part of the reason why a GUID is used is that the number of unique values a GUID can contain is way beyond 2.1 billion.
    Even if someone got creative and started with a negative value (-2,147,483,648), the maximum number of values an int can contain is around 4.2 billion unique values.

    In your example above the inserts on the table will fail when you reach the int limit. By using a bigint I think it’s more of an apples to apples comparison — although a bigint is still more limited than a guid (in terms of how many unique values it can contain).

    Working for a larger company, we have exceeded the int limit on many occasions.

    just a thought

    Laurence

    1. Bigint is fine and often preferred – as you mention – if you think you’re going to get anywhere near the limit of 2.1 billion rows. So, yes, I’d absolutely go with bigint instead of an int. In fact, I usually tell folks that if they even think they’re going to get to 1 billion – then just define it as a bigint and don’t even think about it. I’d rather waste a little bit of space upfront than have to change it later.

      hth,
      k
