One of the things I often see with SQL Server instances is the system running out of space on a drive. This can happen for a variety of reasons, some preventable and some not. You might have:
- old backup files that never get deleted
- data growth that fills the disk over time, usually across years
- rapid tempdb growth that uses all available space
- old import files that aren't cleaned up
There are other reasons, but I've often found that some process causes an emergency: the SQL Server stops working, or stops backing up databases, and administrators are in a panic to free space so the server can continue to function.
Here’s what I suggest to smooth the way with a series of placeholders and a job.
First, create a folder on your SQL Server (or really every server) called Placeholder. I’d put it in the root to make it easy to find and standardize on it.
In the folder, place a series of files to reserve space. If you don't know how to do this, I can show you an easy way. I have 4GB reserved here.
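If you don't have a favorite way to create large files, Windows ships with the fsutil utility, which can create a file of an arbitrary size. A sketch, assuming a D:\Placeholder folder and four 1GB files (adjust the path, names, and sizes to suit; fsutil typically needs an elevated prompt):

```powershell
# create four 1GB placeholder files (1073741824 bytes each)
1..4 | ForEach-Object {
    fsutil file createnew "d:\Placeholder\placeholder$_.file" 1073741824
}
```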
Now create a SQL Server Agent job. I might standardize this on every server I have with the same name and path.
The job has one step, which is designed to delete one file each time it's run.
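If you'd rather script the job than build it in the GUI, here's a T-SQL sketch using the msdb job procedures (the job name and step command are placeholders for whatever you standardize on):

```sql
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Free Placeholder Space';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Free Placeholder Space',
    @step_name = N'Delete one placeholder file',
    @subsystem = N'PowerShell',
    @command   = N'<the PowerShell delete script goes here>';
EXEC dbo.sp_add_jobserver @job_name = N'Free Placeholder Space';
GO
```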
Note that I had a slight bug in what I showed above. I had the contig.exe utility in the folder, and the first execution of the job deleted that file. Not a big deal in an emergency, because I can run the job again, but I'd make sure that only the placeholder files are in this folder on your machines.
Here’s the job. It’s a PoSh type of step.
The actual PoSh code is here:
```powershell
# get all the files in the placeholder folder
$fileEntries = [IO.Directory]::GetFiles("d:\placeholder");
$delete = 1;
foreach ($fileName in $fileEntries) {
    if ($delete -eq 1) {
        Remove-Item $fileName;   # delete only the first file found
        $delete = 0;
    }
}
```
Each time I run this, it's just a single click in SSMS or a call to sp_start_job.
After it runs, I have 1GB more free space. If I need more, I run it again.
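For example, from any query window (the job name here is hypothetical; use whatever name you standardized on):

```sql
EXEC msdb.dbo.sp_start_job @job_name = N'Free Placeholder Space';
```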
However, once you've cleared your low-space condition, be sure to put the placeholders back.
For the next emergency.
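Refilling the folder is the same fsutil trick used to create the placeholders in the first place; a sketch, assuming the same hypothetical D:\Placeholder path and four 1GB files:

```powershell
# recreate any missing 1GB placeholder files after the emergency has passed
1..4 | ForEach-Object {
    $f = "d:\Placeholder\placeholder$_.file"
    if (-not (Test-Path $f)) { fsutil file createnew $f 1073741824 }
}
```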