Autogrow guidelines

I always enable autogrow on my databases. However, it’s there for emergencies, not as a space management tool. I monitor disk space, and I grow my files manually as space runs low. I want to control the growth, but in the event of a runaway process or some unexpected event, I want autogrow enabled to hopefully prevent a database crash.

What level of autogrow do you enable? That was a question I saw recently and it made me stop and think a bit. I asked the question on Twitter, but got very few responses and no real guidance from others.

In my mind, you want to enable a level of autogrow that is likely to handle something unexpected in a single growth, or maybe a couple of growths.

The two options for autogrowth are:

  • a percentage of the current file size
  • a fixed-size growth

In general, I think a fixed-size growth is the setting to choose. It offers more control, and as your database grows, it's unlikely you want to grow by a percentage. A 10% growth of a 100MB database is 10MB, almost a rounding error on many of today's drives.

However, a 10% growth on a 2TB database is 200GB, which could easily exceed the free space on the drives that hold the file. The file properties dialog is where you set the autogrow specifications:

[Image: autogrow settings in the database file properties dialog]

As the file size increases, the free disk space goes down, but the absolute size of a percentage growth goes up as well, exacerbating the problem.
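As a starting point, you can see which files are still set to percentage growth with a query against sys.master_files. Note that the growth column is stored in 8KB pages for fixed-size settings and as a whole-number percentage otherwise (a sketch; adjust the output formatting to taste):

```sql
-- Review the current autogrow setting for every file on the instance.
-- growth is in 8KB pages when is_percent_growth = 0, a percentage otherwise.
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_file,
       mf.type_desc,
       CAST(mf.size / 128.0 AS DECIMAL(12, 1)) AS size_mb,
       CASE WHEN mf.is_percent_growth = 1
            THEN CAST(mf.growth AS VARCHAR(10)) + '%'
            ELSE CAST(mf.growth / 128 AS VARCHAR(10)) + 'MB'
       END AS autogrow_setting
FROM sys.master_files AS mf
ORDER BY database_name, mf.file_id;
```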

So what are the guidelines?

I think you have to look at each database and what data growth is possible in it. For a database that does ETL work, it's more likely that an unexpectedly large or duplicate import in a process could cause larger growth.

However, the question I was asked was about setting a guideline for hundreds of databases, where in the short term it's impractical to review every one. For those, I offer up these guidelines, though I'm happy to have someone give better guidance.

MDF size              File autogrowth
< 1GB                 100MB
1GB – 50GB            2GB
50GB – 200GB          5GB
200GB – 1TB           10GB
> 1TB                 50GB
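For hundreds of databases, one way to apply a tiered guideline like this is to generate the ALTER DATABASE statements from sys.master_files. This is a sketch under the tiers above; review the generated statements before running them, and adjust the thresholds to your environment:

```sql
-- Sketch: build ALTER DATABASE statements that apply the tiered guidelines
-- above to every user data file. size is in 8KB pages, so dividing by
-- 131072.0 converts it to GB. Review the output before executing it.
SELECT 'ALTER DATABASE ' + QUOTENAME(DB_NAME(mf.database_id))
     + ' MODIFY FILE (NAME = N''' + mf.name + ''', FILEGROWTH = '
     + CASE
         WHEN mf.size / 131072.0 < 1    THEN '100MB'
         WHEN mf.size / 131072.0 < 50   THEN '2GB'
         WHEN mf.size / 131072.0 < 200  THEN '5GB'
         WHEN mf.size / 131072.0 < 1024 THEN '10GB'
         ELSE '50GB'
       END + ');' AS alter_statement
FROM sys.master_files AS mf
WHERE mf.type_desc = 'ROWS'   -- data files only, not logs
  AND mf.database_id > 4;     -- skip the system databases
```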


These are guesses on my part, based on some experience, but the largest database I ever managed was a 600GB one, so I’m guessing on the TB scale.

I’d welcome your comments and experience in this area.

About way0utwest

Editor, SQLServerCentral

2 Responses to Autogrow guidelines

  1. John Halunen says:

    My experience has been that for those larger size data files that growth setting may be a bit big. We had multiple heavily used reporting servers with a 3.5TB Fact db and a 2.5TB Dim db and had issues with any autogrowth option setting (we tried settings from 128MB to 4GB), so ended up turning off autogrowth and monitored with scripts/jobs. We had the advantage of having an offline load process (db’s behind VIP’s). That said, in the other less busy large db’s (similar size) that I’ve managed we usually ended up setting the autogrowth on the datafiles to 2 or 4GB. So much of this comes down to how well you monitor the data growth so that the autogrowth doesn’t get used when the systems are busy. Or maybe I just needed faster SAN 🙂
    IIRC there is/was a bug in SQL at exactly 8gb of autogrowth?


  2. way0utwest says:

    You absolutely should be monitoring, but in an emergency, even on a pager, you might not respond in time in many environments. If you have 24 hour support, that’s one thing, but the vast majority of companies don’t.

    I thought the 4GB bug was for log growth, but I’ll have to check on an 8GB one. The 4GB one mentioned here: http://sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx


Comments are closed.