Just use Version Control

There are free systems out there. If you have no budget, and want to get started, download one of these:

There are good ones to pay for as well, and some of them have other features and integration you may like:

There are plenty more. Please, please, please start using version control. Many successful developers have made compelling arguments for it, including arguments for putting your database code under version control as well.

There are many ways to do this, but please pick one. Version control is a must for professional software developers. That means if you write code and get paid for it, use version control. If you write code and don’t get paid for it, value your time and use VCS anyway.
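To show how little ceremony is involved, here is a minimal sketch using Git (one of the free options); the folder name, script, and table below are made up for illustration:

```shell
# One-time setup: a repository for database scripts
mkdir db-scripts && cd db-scripts
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"

# Save a schema script and record it in history
mkdir -p schema
echo "CREATE TABLE Customers (Id INT PRIMARY KEY, Name NVARCHAR(100));" > schema/create_customers.sql
git add schema/create_customers.sql
git commit -qm "Initial Customers table"

# From now on, every change is one add/commit away from a full history
git log --oneline
```

That is the whole barrier to entry; everything else (branching, sharing, tooling integration) can come later.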

About way0utwest

Editor, SQLServerCentral

16 Responses to Just use Version Control

  1. Let me start by saying that I agree. EVERYONE SHOULD USE VERSION CONTROL.

    But in my opinion that is only half the problem. There are many different flavors of version control. To illustrate my point, let’s pick Subversion and TFS. TFS with SSDT will create a copy of your database, or rather a creation script for each object. This is great because you can see what each object in your database looks like, who was the last to change it, and so on. SVN, to my knowledge, does not have a method of keeping your database version controlled automatically, unless you count the Red Gate tooling, which is also great. That kind of keeps a copy of your database at each version, but also tries to keep a migration script to get from one version to the other, “kind of” being the operative word. At the time of writing, I would say it does about 80% of what is necessary to actually use the product reliably.

    We use SVN to hold migration scripts that we generate by hand, and this works perfectly for us, but it is labor intensive and it does not give you a snapshot of your database at an object level unless you use a separate file for each database object. I recently evaluated Red Gate, ReadyRoll, and a few other tooling options, and I have used TFS and SSDT in the past.

    I would love for there to be a better way to do this and maybe there is…

    • way0utwest says:

      I agree that it’s not simple. However, it’s more of a process than a set of tools.

      However, I’m not sure what you mean about SQL Source Control (a Red Gate product). It doesn’t track version to version anymore. It used to, and if you still have that functionality, you need to upgrade. Right now it stores the creation scripts and runs our compare technology to generate the differential script to get from the VCS to any version.

      Point-in-time compare technology certainly has holes (renames, a few others) where you need a migration script. The only way around this is to perform a lot of manual work to track all changes. Both approaches work, but both are work.

      Part of the issue with many shops (perhaps yours, perhaps not) is that process becomes too hard to change for various reasons. When this occurs, you often can’t take advantage of some of the things your tooling can offer. Process needs to be a bit flexible, though to be sure the tools could improve as well.

    • paschott says:

      I agree that using some sort of version control is essential. I’m not sure why you’d use SVN to store a script but not store a dacpac. Storing the dacpac would give you that version of your entire schema in one file, and you could even use that to push that specific version or just diff against it. If you generate snapshots from your SSDT projects, you can even store all of those in your version control system. It’s more work, but you’d have those differences. From what I remember of SVN, you can still have some kind of branching strategy, so you could easily put your SSDT projects or Red Gate projects into an SVN repo and then maintain your projects/files within the repo.

      We chose Git for our VCS – primarily because we could work in a disconnected fashion much more easily. That fit our style. We’re using SSDT, but admittedly that’s because we started with DB Projects back in VS 2008 and have kept using them. There are some annoyances but it works. I tried Red Gate’s SQL Source Control and it didn’t quite work as well for me. I can see the value, though.

  2. Megan Brooks says:

    I agree. I used to do schema source control the hard way, creating scripts and then checking them in to SourceSafe. Older versions of SQL Server had some way of versioning stored procedures, but I never found that useful.

    A few years ago I started using the newer Database Project versioning features in Visual Studio. At the time, you needed Visual Studio Premium in order to do a schema compare, which is often essential to tracking and checking in changes with minimum effort, so I dutifully upgraded.

    Since then, Schema Compare has become a part of SQL Server Data Tools and, if you don’t have something better that you prefer, there is no longer an excuse not to use it, other than the sometimes painful learning curve. But it’s your choice — painful learning curve or potentially very painful database upgrade disaster.

    I know that a lot of corporate environments are running “legacy” versions of SQL Server, and I don’t know if they would be able to use SSDT. If not, there’s always script-and-check-in (and possibly upgrade your environment). But if you are able to use SSDT then it is pretty simple to track changes in a database (ignoring, of course, all the weird cases that sometimes pop up). You create a database project, import the current database schema, and check it into version control. When you make changes to the schema (hopefully somewhere in your development environment), you perform a schema compare against the project, update the project with the changes that are found, and check it in again.

    When you are ready to deploy, you do a schema compare between the database project and production, generate an update script, go through it carefully, tweaking as necessary, back up the database, and run the script. If you need to back out a set of schema changes, you repeat the same process using an earlier version of the database project retrieved from version control.
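    The back-out step above relies on being able to retrieve an earlier version of the project from version control. A hedged sketch of that idea, assuming Git as the VCS and a made-up Customers.sql script:

```shell
# A toy project history with two versions of a table script (names are hypothetical)
mkdir ssdt-demo && cd ssdt-demo
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"

echo "CREATE TABLE Customers (Id INT);" > Customers.sql
git add Customers.sql && git commit -qm "v1: initial table"

echo "CREATE TABLE Customers (Id INT, Name NVARCHAR(100));" > Customers.sql
git add Customers.sql && git commit -qm "v2: add Name column"

# Pull the earlier version straight out of history, without touching the working copy
git show HEAD~1:Customers.sql > Customers_v1.sql

# Diff old vs. new to see exactly what a back-out script has to undo
git diff HEAD~1 HEAD -- Customers.sql
```

    The retrieved file can then feed a schema compare against production, exactly as with the forward deployment.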

    If you use SSDT to track changes in multiple related databases, it can be messier. You need to create database references for the other databases, and you need to watch out for the default database variable name that SSDT defines for you when you create a reference — if you don’t blank it out then the reference typically will not work for you the way you would want. (See http://far2go.net/2013/05/07/cross-database-references-ssdt-database-projects/)

  3. way0utwest says:

    You need scripts because a DACPAC won’t necessarily have all the information. What if you renamed an object? When you compare two DACPACs, how do you determine that one has a renamed object vs. a deleted object plus a new one?

    There are a few issues where point-in-time DACPACs don’t necessarily work. There’s also the issue of tracking just a change or two. Why restore the entire schema? That’s almost like zipping up your entire C# project, time-stamping it, and then diffing the entire thing. What if I want to roll back or undo a single change?

    • paschott says:

      Hmm, we’ve always used the Refactor Log in the latest versions to see if something was renamed, but then we don’t store dacpacs in source control; we store the whole project and its files, so we can just look at the logs for the files without too much trouble. I agree that storing the dacpacs in source control is not the best practice by any means, but the dacpac contains everything I need to recreate the database at that point in time and would be smaller than a whole create script, though likely larger than just a diff script.

      I was more confused by not storing the actual SSDT project in source control, with all of its files being versioned. It sounded like people were just storing their diff scripts in source control and that seems less valuable to me (though better than nothing).

  4. way0utwest says:

    We don’t store diff scripts for most objects, only for the cases that don’t work. Refactor logs might work, but that seems like it’s not the best way, especially in non-shared environments; you’d have lots of logs. There are also cases like adding NOT NULL columns to existing tables; those require a few extra steps of work. If it works for you, great. I suspect it’s not a good general practice for most environments.

    The idea is that most objects are stored as a create or alter script and only certain scripts are needed.
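    One possible repository layout for that approach, sketched here with hypothetical names (per-object create scripts, plus a small folder for the exceptional hand-written migrations):

```shell
# Illustrative layout only; folder and file names are made up
mkdir -p repo/tables repo/procs repo/migrations

# Most objects: one create/alter script per object, kept current
echo "CREATE TABLE Orders (Id INT PRIMARY KEY);" > repo/tables/Orders.sql
echo "CREATE PROCEDURE GetOrders AS SELECT Id FROM Orders;" > repo/procs/GetOrders.sql

# Exceptions only: hand-written migrations, e.g. a NOT NULL column with a default
echo "ALTER TABLE Orders ADD Status INT NOT NULL CONSTRAINT DF_Status DEFAULT 0;" > repo/migrations/001_add_order_status.sql

find repo -name '*.sql' | sort
```

    The migrations folder stays small because it only holds the cases the compare tooling cannot generate safely on its own.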

  5. ddsg says:

    This will sound like a religious debate. I started with Mercurial but soon dumped it for Git and never looked back. Mercurial was like a straitjacket; you have very little freedom. You have to commit all or nothing. You cannot stage your commits. You cannot install commit message hooks. You cannot commit some files and not others. You cannot commit some parts of a file while leaving other edits of the same file for a different commit. You cannot have more than two branches at a time.
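    For comparison, the selective-commit behavior described above looks like this in Git (file names are made up; git add -p extends the same idea to individual hunks within a file):

```shell
# A toy repo with two new files
mkdir staging-demo && cd staging-demo
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"

echo "-- proc one" > proc_one.sql
echo "-- proc two" > proc_two.sql

# Stage and commit only one of the two files
git add proc_one.sql
git commit -qm "Add proc_one only"

# proc_two.sql remains untracked, free to go into its own commit later
git status --short
```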

    • way0utwest says:

      I can’t speak to how Git v Mercurial compare for various features. I like Git, but I know a number of the Red Gate developers doing C# work prefer Mercurial.

      Ultimately, you just need something that works for you.

  6. Sam Vanga says:

    For those who write code and don’t get paid for it, it’s worth mentioning visualstudio.com, TFS in the cloud and free for up to 5 users. It takes a few minutes to set up and you don’t have to install anything!

  7. Prasanna says:

    Hi. Can we tightly integrate TFS with SQL Server Management Studio? Our expectation is that developers should not be able to alter stored procedures without checking them out first. Thanks

    • Megan Brooks says:

      You can, using TFS and SSDT, but not (to my knowledge) in a software-enforced way. Don’t have developers work in production. Instead, have them work in a development environment where they can do whatever they need to do. When changes are ready they merge them into an SSDT project and check it in to TFS. A DBA then locks the changes, creates a change script, checks it, and runs it against pre-prod or production as required.

      There are other tools available, and other procedures, but this is a basic process that is supported using only Microsoft components common in SQL Server development environments. My experience is that TFS and SSMS work well together when the required patches are installed.

      I’m not sure of all the implications of using SSDT with older versions of SQL Server (and SSMS), so that is something to watch out for. SSDT version support is discussed at http://blogs.msdn.com/b/wadep/archive/2012/10/18/sql-server-data-tools-amp-vs2010-vs2012-details.aspx and http://msdn.microsoft.com/en-us/data/hh322942

      Note that SSDT, in the context of SQL Server, can mean the replacement for BIDS (i.e., the SSIS, SSRS, and SSAS designers) as well as the more basic SSDT package for Visual Studio that does not include those BI features. But what is of interest here is the SSDT “database project” feature set that supports schema versioning. Those features should work with all supported SQL Server versions, although I can’t confirm that from direct experience.

    • paschott says:

      SSDT works with 2005, 2008, 2008R2, and 2012. There is a new version in the works to support 2014, but not sure if that will be separate or integrated.

      Note that there’s SSDT (schema management) and SSDT-BI (SSIS, SSAS). Different versions, bad naming convention.

      I’d also recommend some sort of DDL auditing for systems to catch all of those schema changes – just in case.

  8. Reece Watkins says:

    The version control system you learn first will probably be the one you prefer, but really, any of these tools will get the fundamental job done. I’ve used Subversion and Mercurial, and I’m just starting to explore git a bit, but the bottom line is: ANY source control is better than none at all.

    If you like all your projects and code to be in one large central repository, Subversion may be where you want to start. If you like each project to be its own self-contained entity, look at Mercurial or git. Or try all three, since they’re free.

    If you like GUI integration, take a look at the open-source “Tortoise” packages for each (TortoiseSVN, TortoiseHg, and TortoiseGit). There are other toolsets with varying features, but these three have somewhat of a common interface so you can feel out which core VCS you prefer. The git and SVN versions require the source control system be installed prior to Tortoise, but TortoiseHg includes the latest version of Mercurial, so everything can be installed from one download.

    Subversion requires a central server that everyone can access, but git and Mercurial work on individual copies of source repositories that can be shared/pushed/pulled to other users as needed. (This is why distributed systems like Mercurial and git are arguably a better choice for small, individual repositories, while Subversion is a more centralized solution. There’s no “bad” choice, though. There may be one that is “best” for your particular situation, but picking one over the other isn’t really a career-courageous decision to agonize over. The first time one of them saves you from a careless mistake, you’ll want to kiss somebody, anyway!)

    And, of course, there is Red Gate’s own SQL Source Control product to help integrate your chosen VCS with SQL Server Management Studio.
