In a recent discussion, someone mentioned that they had built timing mechanisms into their application so they knew how long certain modules or functions took to execute. Using this data, they could easily tell whether the system was performing poorly by comparing current performance against the average of past timings.
I thought that was a great idea, and it shows some proactive, forward thinking on the developer's part. I haven't often seen this in SQL Server, though it does seem that many good DBAs maintain some type of overall baseline for their database servers to help them respond to performance issues.
For a Friday poll, I was curious how many of you actually think about monitoring your systems. I decided to ask this in a general way to get an idea of what others might be doing inside SQL Server as well as in the applications that connect to SQL Server.
Do you build instrumentation into your applications and code to measure performance?
I’m wondering if you actually collect, log, and even analyze data on how long processes take, or how much data you work with. Do you capture row counts from data loads or processing? Do you maintain an average time for code to execute?
I haven't done a lot of this in code. There was one application that watched for files and loaded them every day. We built code into SQL Server that tracked how long the load took and how many rows were processed, mostly as a way to prepare for future hardware upgrades. As a production DBA, I have typically kept an average of the size and duration of backups for the same reason. It helps me with capacity planning, and it alerts me to growing loads on the server.
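To make the idea concrete, here is a minimal sketch of that kind of instrumentation in Python. Everything in it is hypothetical (the job name, the helper functions, the in-memory log), and in a real SQL Server setup you'd write each run to a logging table instead of a list, but the shape is the same: capture the duration and row count of each run, then compare new runs against the historical average.

```python
import time

# Hypothetical in-memory run log; in production this would be a table.
timings = []

def record_run(job_name, func, *args, **kwargs):
    """Run a load function, recording how long it took and how many rows it reported."""
    start = time.perf_counter()
    row_count = func(*args, **kwargs)  # assumes the load returns its row count
    elapsed = time.perf_counter() - start
    timings.append({"job": job_name, "rows": row_count, "seconds": elapsed})
    return row_count

def average_seconds(job_name):
    """Average duration of all recorded runs for a job, or None if none exist."""
    runs = [t["seconds"] for t in timings if t["job"] == job_name]
    return sum(runs) / len(runs) if runs else None

# Stand-in for a daily file load.
def load_file():
    return 1000  # pretend we loaded 1,000 rows

record_run("daily_load", load_file)
```

With a few days of history in the log, a run that takes, say, twice the average becomes an obvious flag worth investigating before users start complaining.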
If you do anything differently, or if you’ve found this to be valuable, or even a waste of time, let us know this Friday.