Is a million writes/sec a large number? I guess it's all relative, but I'd consider it a fairly large number. Our peak transfers for SQLServerCentral are in the hundreds/sec, though we're not really a busy database server. Some of you might have much higher numbers, and if you're in the 100,000/sec range and can let us know, I'm sure others would be interested in hearing about your experiences.
I ran across a piece on the Uber database infrastructure that I found impressive. No, Uber doesn’t use SQL Server (they use Cassandra), but they have worked to build a large scale infrastructure. Their goals:
- 1 in 100,000 requests can fail
- 1,000,000 writes/sec
- > 100,000 reads/sec
Those are quite impressive. While I'm not sure they've ever achieved these levels in production, I'm glad they're testing at these points. I think far too many people forget to test the limits of where their systems might grow and only stick with where they are today. Or where they were a month ago when they refreshed a test environment from production. Test at larger than production levels, at least once in a while.
There’s something impressive about one million. Getting to a MB, roughly 1mm bytes, was impressive to me. Not such a big deal now (with pictures requiring > 1MB), but 1mm MB is a terabyte, and while I carry that in my pocket, it’s still an impressive size. Crossing one million members at SQLServerCentral was impressive. I think $1mm is a lot of money. One in a million still seems like a very small chance of an event. At the recent Data Science Summit, we saw SQL Server scoring over 1mm fraud predictions/sec.
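The size arithmetic above can be checked in a couple of lines (using decimal units, where 1 MB = 10^6 bytes and 1 TB = 10^12 bytes):

```python
# Decimal storage units: 1 MB is roughly a million bytes,
# and a million MB is a terabyte.
bytes_per_mb = 10**6
mb_per_tb = 10**6
bytes_per_tb = mb_per_tb * bytes_per_mb

print(f"1 TB = {bytes_per_tb:,} bytes")  # 1 TB = 1,000,000,000,000 bytes
```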
Achieving 1mm of anything in a database system is still a large number. I know many people have tables with over a billion rows, but I'd still say a million is large. Perhaps you disagree, but I'm still a little awed at seeing SQL Server process a query over 1mm rows in less than a second.