Scaling Up

A petabyte of disk space

For most of my production DBA career, I worked with databases whose size was measured in MB. A few got into the GB range, with the largest dataset I ever managed being 800GB. That was in the days of 35GB and 70GB drives, when SANs were just being widely deployed in companies.

I did have the opportunity in the SQL Server v6.5 days to interview with a few companies that had TB-sized databases. One was running a 13TB v6.5 system, and when I heard that on the phone interview, I declined to continue the process.

I like a challenge as much as the next person, but at that time I had a 6-year-old and an infant son and preferred being able to go home at night and see them. I am not sure how I would approach that challenge today, but I did think this would make a good Friday poll.

Would you want to manage a large scale SQL implementation?

By large scale I am thinking of a 10+TB OLTP system or a 30+TB warehouse or cube: something out of the ordinary in terms of scale that would present challenges beyond the systems you currently work with.

My thought is that at this stage of my life, I don't need the long hours required to deal with some operations on these systems. Simple checks, test restores, and all sorts of routine maintenance become much larger issues when you are dealing with TBs of data. Unless we get substantially faster hardware in the future (in terms of IOPS), I can't imagine how we will handle PB-sized databases.
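As a rough illustration of what I mean by "simple checks" getting harder, here is a sketch of how many teams restructure a routine integrity check once a database no longer fits the maintenance window. The database and filegroup names are made up for the example; the pattern is simply a lighter physical-only pass run often, with the full logical checks spread across filegroups over the week.

-- Frequent, lighter pass: physical-only consistency check (hypothetical database name)
DBCC CHECKDB (N'BigWarehouse') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Deeper checks spread out over time, one filegroup per night (hypothetical filegroup name)
USE BigWarehouse;
DBCC CHECKFILEGROUP (N'FG_Sales_Archive') WITH NO_INFOMSGS;

The same thinking applies to test restores: a weekly full restore of a multi-TB database is a serious time and storage commitment, which is exactly the kind of operational overhead I'm asking about.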

Let us know if you are up to that challenge today, and if you work on VLDBs, I would be interested in knowing how you like it.

Steve Jones

(originally published at http://www.sqlservercentral.com/articles/Editorial/72151/)

Podcasts:

  1. Video Podcast – 20.4MB WMV
  2. Video Podcast – 14.9MB MP4
  3. Audio Podcast – 3.4MB MP3

About way0utwest

Editor, SQLServerCentral