AutoMap in SQL Compare

SQL Compare is a core product from Redgate and I’ve got a series on some of the interesting things I’ve found. Download a trial today if you haven’t tried it.

SQL Compare does a great job of trying to help you map columns in your database. For example, I was testing deployments recently to see how a process might work, and I noticed that SQL Compare had automapped columns for me.

This is a feature in SQL Compare that tries to make your development effort a little quicker and easier. As you can see in the image below, there’s a note at the top of the comparison, and then an indicator for the actual object.

[Screenshot: SQL Compare – New project, comparison showing the AutoMap note]

If I click the object, I see the code, but I don’t know where the auto mapping has taken place.

[Screenshot: SQL Compare – New project, object-level code view]

In this case, I know because I changed the Quantity column name, but there’s nothing special here about the mapping. SQL Compare just mapped “Quantity” to “Qty” by itself. If I check the project properties, I can see the mapping, and change it if necessary.

If you want to know how to customize this, I’ve got an article on the Redgate Hub on How to Customize Schema Comparisons using Auto Map in SQL Compare.

AutoMap is one of those handy features of SQL Compare. It gets you moving quickly in development by taking care of some work for you, and you can easily change the mappings and save them in your project if you need to.


Posted in Blog | Tagged , , | Leave a comment

It’s Time for SQL in the City

Today is SQL in the City 2017. Once again we’re streaming the event online, live from Cambridge in the UK. I’ve flown over, along with Grant Fritchey and Kathi Kellenberger, to entertain you for the day. We’ll be broadcasting at 11am in the UK for five hours before we take a short break and return for a 1pm EST start of the second broadcast. You can still register, and we hope you’ll tune in for a session or two, or all of them if you can convince the boss to give you a training day. You won’t know unless you ask.

I’m excited to show off a few things today, as I’m sure Kathi and Grant are as well. Our developers have been hard at work throughout the year, improving our software and adding new features, while fixing a few bugs. It feels like the pace of change is steadying out, with most products releasing changes every two weeks, but we’re adding a lot more exciting and interesting changes. It seems that the products are maturing well, in ways that I didn’t expect.

I’ve been especially excited by the changes in SQL Clone this year. We’ve really grown this product up, with the team being responsive to customer requests. It’s exciting to be the one to present some of the changes to you, as well as try to introduce others to the product. This is one of those products that I wish had been available 15 years ago, as I would have saved a lot in storage costs and gained a tremendous amount of flexibility.

SQL Prompt and SQL Monitor have also had great years, and I’ve been impressed with their growth. Unfortunately I didn’t get the short straw and had to leave those products to my colleagues. I guess I can’t have all the fun. Maybe I’ll try to photobomb my way into one of those sessions.

Presenting at events is exciting, and I’m looking forward to the day. It’s long, but it will still be a fun time. These streaming events are also a neat experience, as we’ve got a bit of a TV studio feel, with multiple cameras, displays, tape on the floor, and more. For someone who is used to sitting at a desk and working at my own pace, I get a little excited by the effort our IT staff put into the broadcast. We have various schedules and checklists, and I’ve spent the last couple of days rehearsing the talks.

I still like live events, and hope we’ll do a few in the future, but for now, sit back, relax, and enjoy SQL in the City from the comfort of your office.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7 MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

T-SQL Tuesday #098–2018 Goals

A great topic for T-SQL Tuesday this time from @sqlmal. It’s Setting Learning Goals for 2018, and it’s a topic I both loathe and love. I hate making commitments that I can’t keep, and I constantly find that a chaotic life will get in the way of learning over time.

As an example, I tried reading the PowerShell in a Month of Lunches book. I managed to get to Day 19. It’s not that I don’t like PoSh or have given up (I still blog on the topic); it’s that the setup becomes more complex later in the book (you really need a domain), and I became busy at work.

Goals do have a way of focusing a person on a task, and they’re good in that sense. With a year of less travel coming, this was a timely topic for me and forced me to stop and think a bit about the way I want to drive learning in 2018.

Goals for 2018

What do I want to learn next year? Here’s a short (ish) list, in no order:

  • CosmosDB
  • Python
  • Pester
  • Extended Events
  • Entity Framework

I could try to tackle all of these, but that certainly wouldn’t serve me well. The list is too large to become strong in every area, though I could spend time improving my skills a little bit in each of them.

I could relate many of these items to my job, which makes it hard to choose. In addition, I’ve got other things that I know will be part of my learning next year: product changes, new integrations between our products and others, plus who knows what will come about.

Picking Two

If I have to say that I’ll spend 100 hours on something next year and try to get better, I’ll say that the two areas I’ll focus on are Extended Events and Python. While I may dabble in the others, I think that these two areas will be more important to my career as well as my job over time.

I need a much better understanding of how XE works apart from trace, and I need a lot more comfort in gathering and analyzing the data.

I also think that Python, with its addition to SQL Server ML and its growing popularity in new packages and modules, is an area that provides some variety, excitement, and a relation to my job. Plus, it’s in SQL Data Generator, so I should have lots more examples there.

My goals for these two are to spend 50 hours on each over the year, in a combination of course work (articles, videos, etc.) and practical work (building things and trying to use the technology to solve problems).


There’s always something that comes up, and while I don’t want to start making excuses, I recognize that there are a few things that might derail my plan and make me change.

First, GDPR. I have no idea how this will impact Redgate and me, but I might need to spend a bunch of time early in the year on this. We are getting calls, questions, and requests from clients, and as some of our new integrations and products come out, this might eat up time.

Second, travel. I haven’t planned on much, but I’ve really only set the first 4 months of the year. After that, we’ll see. Who knows what might come up or what demands arise.

Third, life at home. I never know what will happen here and how things might change, so that could change how I focus. Plus, I have a sabbatical coming, so we’ll see how that affects me.

Getting Ready

I’m doing a few things right now. First, I’m setting a monthly reminder to update my goal progress. I’ll try to write a post each month, for better or worse, on where I am.

[Screenshot: Appointment Recurrence dialog for the monthly reminder]

Second, I’m setting an appointment for myself to sit down and plan some things out the last week of December. That’s a good time to spend an hour or two and try to come up with a plan. It won’t be perfect, but it will give me a direction on which to start. I can amend this during my monthly review if needed.

That’s it, thanks to Mala for a great topic, and we’ll see how it goes.

Posted in Blog | Tagged , , | Leave a comment

Renaming a Column–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

One of the things I rarely do is rename objects. There are good reasons to do so, but often the changes required in other objects and applications aren’t worth the hassle. This is one reason why it’s good to spend a few minutes on design and come up with good names from the beginning.

In any case, do you know how to do this? You could use SSMS and easily change the design of the table. Of course, SSMS might try to rebuild the table, which might not be what you want. Hopefully that’s not the case, though you should always get the script instead of just saving the change.

The code you need is a simple metadata change that uses sp_rename. In my case, I want to change Qty to Quantity. I’d use this code:

EXEC sp_rename @objname = 'Sales.OrderLines.Qty',
     @newname = 'Quantity',
     @objtype = 'column';

I wish we had a direct ALTER TABLE statement that worked here, or better yet, an ALTER TABLE that accepted the entire table definition (that’s not coming), but I’m not holding onto any hope that Microsoft will change this.

If you think you need to create a new table, insert the data into it, and then rename both tables, that’s not the best way. A simple metadata change is always preferred.
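If you want to convince yourself that the rename really is metadata-only, you can check the catalog views before and after. A minimal sketch, assuming the Sales.OrderLines table from the example above (the WideWorldImporters-style schema):

```sql
-- Confirm the column name after running sp_rename.
-- No data is touched; only the metadata in the catalog changes.
SELECT c.name AS ColumnName
FROM sys.columns AS c
WHERE c.[object_id] = OBJECT_ID(N'Sales.OrderLines')
  AND c.name IN (N'Qty', N'Quantity');
```

After the rename, this should return a single row with Quantity. Note that sp_rename won’t fix references to the old name inside views, procedures, or application code, so searching sys.sql_modules for the old name is a sensible follow-up.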


I ran into this while helping someone test a change and thought it was a good, easy reminder of how to change names. You could show you understand this in a blog post written in five minutes.

Posted in Blog | Tagged , , | Leave a comment

Sharing Data

One of the things written into various computer laws is that unauthorized access of a computer system with another’s credentials is a crime. That’s been drilled into me at various organizations where no one was supposed to share any credentials. It’s also something many sysadmins drill into users while sharing their own system-level credentials for accounts like root, sa, and system. I’ve seen no shortage of these accounts used across multiple services and by multiple people, often by the same admins who would admonish a manager for sharing a password with an assistant.

This week Troy Hunt wrote about politicians sharing passwords, and the problems with doing so. It’s an interesting read, and it certainly points out that the expediency of having users share a workload has plenty of downsides for accountability and auditing of actions. I think there’s little excuse for sharing security credentials in the UK government, as there are other solutions to handle this issue. I am more sympathetic in real-time environments, like hospitals, where a delay at login might literally cost a life.

Leaving aside the authentication aspect, we often share data among individuals inside an organization. Outside of sysadmins, there might not be many people that really understand who should have access, let alone who has access, to some data. Over time, most organizations tend to allow an ever-growing number of people to access data in file shares. While we might grant and revoke database access carefully, the output from our systems often ends up in Excel sheets or other files, and people without direct access still see the data.

The real, physical world is just as bad, since many people leave data lying around on desks or tacked to a wall. Just like credentials on a post-it, we have lots of data available for others to read, though physical access is required. Have you thought about how many people have physical access? It’s not just your co-workers; janitorial staff, tradespeople, and others likely wander regularly through your office spaces.

Security is a tough battle, one that is interesting in that most of the time we don’t need much more than good passwords. Most people don’t have the time or inclination to deal with their own data, much less yours. However, when an attack targets your organization, from outside or within, it’s extremely difficult to ensure data won’t get lost.

I don’t have a great solution, but I do think there are good reasons to limit access to data on our systems, not the least of which is auditing and accountability. Beyond that, we have to hope that our users exercise some judgment about with whom they share reports and other data.
Steve Jones
Posted in Editorial | Tagged | Leave a comment

SSMS Updates in 17.4

SSMS 17.4 is out, and if you haven’t upgraded from the versions bundled with SQL Server 2014, 2012, 2008, etc., you should do so. You can run your bundled SSMS side by side with the new version, and the 17.x releases are improving and becoming more stable.

You can download the new version and use it for free. That alone should be a reason to update your workstations.

This update includes a few new interesting items.

  • Vulnerability Assessment – Scanning for security and misconfiguration issues
  • Always On Dashboard – New latency reports
  • Showplan – some fixes and new operator icons
  • XE Profiler – Very cool to see this being enhanced.

There are other updates and fixes as well. I’m sure there are some bugs, but for the most part, I think that SSMS 17.x is the way to go forward.

Posted in Blog | Tagged , | Leave a comment

Singular or Plural

There are all sorts of interesting debates that developers will have about programming. One that I’ve written about in the past is Spaces v Tabs. There’s a lot of discussion about that one, and certainly no shortage of strong opinions. I’m sure there are plenty of other development paradigms and habits that will create debate, arguments, and perhaps strain some working relationships.

While I’m not looking to upset anyone, I ran across a discussion recently and thought this would make an interesting debate. If you’ve got an opinion, please share how and why you choose to follow your convention. Examples are helpful and may strengthen the reasons why you choose to build systems one way or the other.

If you examine any sample databases out there, you will run across tables named like this: Customers, Orders, Cities, etc. You might also find Product, Person, and Address. If you’re paying attention, you’ll notice that my examples are both singular and plural in form. There is no shortage of debate on the topic, but I’m wondering what many of you think.

I’ve tended to build tables with plural nouns in the past, but I think that’s because the first few people who taught me did so. As I’ve read more and listened to others explain their design decisions, I’ve embraced singular names. After all, the entity being modeled is often a single instance of a type: a Person or a CreditCard, not a series of them. Though we could certainly argue the table is a set of people, so a plural name fits too.

Ultimately I’m not sure that it matters much in any particular application. We certainly have databases that use each convention, and probably a few that use both inside the same schema. I think choosing an entity name that is easy to understand is important, and maybe the question of singular or plural names matters less. After all, as long as you’re not naming tables F42 and H1492, everyone will get used to the convention.
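To make the two conventions concrete, here’s a minimal sketch using hypothetical tables (the names and columns are mine, purely for illustration):

```sql
-- Singular convention: the table name models one instance of the entity
CREATE TABLE dbo.Person
(
    PersonID  int           NOT NULL PRIMARY KEY,
    FirstName nvarchar(50)  NOT NULL,
    LastName  nvarchar(50)  NOT NULL
);

-- Plural convention: the table name describes the set of rows
CREATE TABLE dbo.Customers
(
    CustomerID  int            NOT NULL PRIMARY KEY,
    CompanyName nvarchar(100)  NOT NULL
);
```

Either reads fine in queries; the point is picking one convention per schema and sticking with it.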

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5 MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment


Mohammad Darab, a fellow SQL Server professional in Northern Virginia, sent me an interview. I thought the questions were interesting and spent a bit of time on them. You can read the interview here:

Posted in Blog | Tagged , | 2 Comments

Live on Channel 9 with ReadyRoll

I was interviewed in Redmond a couple of months ago, and the video was just published. I show and demo some database migration work with ReadyRoll in this video:

Posted in Blog | Tagged , , , | Leave a comment

The Relational Database is the Default

It seems that I regularly meet people in the SQL community who are asked about moving their datastore to a NoSQL type of database. In a mature system, that might be a conversation worth having, especially if there are issues handling the workload. I doubt there are many types of workloads that I would think are worth moving, but perhaps some exist. However, the discussion about moving is one you can have, given your understanding of the deficiencies you see in your datastore.

What I often find is a different story for new applications. Often I hear about developers becoming enamored with a datastore simpler than SQL Server or some other RDBMS, looking to MongoDB, Cassandra, Hadoop, Neo4J, etc. as the shiny, new, exciting choice. They think NoSQL offers an easier way to build their application, where they can throw and grab JSON through an API while changing their minds about the ways in which they need to handle data.

They approach a new application like a startup would, using some exciting technology that a hot company in the news has just proven successful. They look forward to developing an application that other developers will be jealous of next year. Or they just want to use something new that’s being written about on blogs and spoken about at conferences. Or they just want to do something more interesting on the next project. All of these are understandable motivations, and they certainly make one’s job more engaging.

However, just as this piece notes, choosing a new datastore because it’s new and exciting, and not because it fits the data model, is a mistake. The data model matters, often because it is the permanent output of your software application. The data will live on and needs to meet current and future needs, separate from the amazing Node.JS framework of the week that your users love. Actually, users don’t love your framework; they love your work. Whether it’s MVC, Angular, Django, or Dapper, most clients couldn’t care less. Any language/framework/platform can build an application that works well for clients. Your application just has to work.

Relational databases work really, really, really well. They handle most problem domains well, with protection against common consistency errors, and they have plenty of features that help you build a solid data model. There are tips and tricks to migrate your schema to some new form if you need to. However, if you need to migrate your schema every few days, you haven’t really thought about the schema. As the article notes, you need a schema. On read, on write, or both, you’re not getting away with ignoring schema.

And by the way, if you keep your schema in Python or C# or some other language, you’re likely creating a set of “legacy code” that will haunt you for years and is much harder to manage than database schema migrations. Maybe it feels less risky, but it’s probably way less fun to work on.

Databases are a pain to work with. I get that. It’s so much more cumbersome than refactoring C# classes on the fly. Databases have a different job, and they do it well. If you have a good reason to choose some other datastore that’s related to your problem domain and data model, then go ahead and choose something else. If you don’t, then use SQL Server (or PostgreSQL, Oracle, etc.). Most of you won’t, so just stick with relational databases. You might not thank me, but your organization and future developers will.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment