How many records can I store in a SQL Server table before it gets ugly?

I've been asked to do some performance tests for a new system. It is only just running with a few clients, but as they expect to grow, these are the numbers I am working with for my test:

200 clients, 4 years of data, and the data changes every 5 minutes. So for every client there is 1 record every 5 minutes. That means 365 * 24 * 12 = 105,120 records per client per year, which comes to roughly 84 million records for my test. The table has one FK to another table, one PK (uniqueidentifier) and one index on the clientID.
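
Roughly, the table looks like this (a minimal sketch - the names are made up for this post, and the referenced dbo.Client table is assumed to exist):

    -- Rough sketch of the table (names invented; dbo.Client is assumed to exist)
    CREATE TABLE dbo.ClientReading
    (
        ReadingID uniqueidentifier NOT NULL
            CONSTRAINT PK_ClientReading PRIMARY KEY,            -- the uniqueidentifier PK
        ClientID  int              NOT NULL
            CONSTRAINT FK_ClientReading_Client REFERENCES dbo.Client (ClientID),
        ReadingAt datetime2(0)     NOT NULL,                    -- one row per client per 5 minutes
        Value     decimal(18, 4)   NOT NULL
    );

    CREATE NONCLUSTERED INDEX IX_ClientReading_ClientID
        ON dbo.ClientReading (ClientID);

    -- Volume: 365 * 24 * 12 = 105,120 rows per client per year,
    -- so 200 clients * 4 years is roughly 84 million rows.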

Is this something SQL Server laughs at because it doesn't scare it at all, is this getting too much for one quad-core 8 GB machine, is this right on the edge, or.....

Has anybody had any experience with these kinds of numbers?


The PK field should be as small as possible and not be random - a GUID sucks here. The main problems are:

  • The PK is used in all foreign keys to reference the row, so a large PK uses more space = more IO.
  • A random PK means inserts happen all over the place = many page splits = inefficient index usage.

How bad is that? I know of scenarios where you lose 80% of your speed there.
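
A common way around that (just a sketch, not necessarily right for your exact schema) is a narrow, ever-increasing clustered key, keeping a GUID only if callers really need one - and then generated with NEWSEQUENTIALID() so it is not random either:

    -- Sketch: narrow, ever-increasing clustered PK; inserts always append at the end
    CREATE TABLE dbo.ClientReading
    (
        ReadingID bigint IDENTITY(1, 1) NOT NULL
            CONSTRAINT PK_ClientReading PRIMARY KEY CLUSTERED,  -- 8 bytes instead of 16
        PublicID  uniqueidentifier NOT NULL
            CONSTRAINT DF_ClientReading_PublicID DEFAULT (NEWSEQUENTIALID()),  -- sequential, not random
        ClientID  int              NOT NULL,
        ReadingAt datetime2(0)     NOT NULL,
        Value     decimal(18, 4)   NOT NULL
    );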

Otherwise - no problem. I have a table in excess of 800 million rows, and things are super fast there ;) Naturally you need decent queries and decent indices, and obviously it does not run on a single 5400 RPM green hard disc - but given proper IO, non-stupid queries and some decent indices, SQL Server does not balk at a couple of billion rows.
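
What "decent indices" means depends on your queries, but for the obvious access pattern here - all readings for one client in a time window - a composite index like this is the usual starting point (column names taken from the sketch above, so treat them as placeholders):

    -- Covers "readings for one client in a date range" without touching the base table
    CREATE NONCLUSTERED INDEX IX_ClientReading_Client_Time
        ON dbo.ClientReading (ClientID, ReadingAt)
        INCLUDE (Value);

    DECLARE @ClientID int          = 42,
            @From     datetime2(0) = '2011-01-01',
            @To       datetime2(0) = '2011-02-01';

    SELECT ReadingAt, Value
    FROM   dbo.ClientReading
    WHERE  ClientID  = @ClientID
      AND  ReadingAt >= @From
      AND  ReadingAt <  @To;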

So, while "it depends", the generic answer is that large tables are not a problem... ...unless you do MASS deletes. Deleting half the table will be a HUGE transaction, which is why partitioning is nice for stuff like accounting - one partition per year means I can get rid of a year's worth of data without a DELETE statement ;)
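
Roughly, that yearly partitioning looks like this (a sketch only - boundary dates, names and the single filegroup are placeholders, and dbo.ClientReading_Archive is an assumed empty staging table with the same structure on the same filegroup):

    -- Partition the table by year so an old year is removed per partition, not row by row
    CREATE PARTITION FUNCTION pfReadingYear (datetime2(0))
        AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01', '2011-01-01', '2012-01-01');

    CREATE PARTITION SCHEME psReadingYear
        AS PARTITION pfReadingYear ALL TO ([PRIMARY]);   -- one filegroup to keep the sketch simple

    -- The table's clustered index is then created ON psReadingYear(ReadingAt), and dropping
    -- the oldest year becomes a metadata operation instead of a huge DELETE transaction:
    ALTER TABLE dbo.ClientReading SWITCH PARTITION 1 TO dbo.ClientReading_Archive;
    -- (on SQL Server 2016+ you can instead use TRUNCATE TABLE ... WITH (PARTITIONS (1)))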