Join me for Masking, Privacy, Protection, and Clones at SQL in the City Feb 28

Registration is open for SQL in the City 2018. Our first event is Feb 28 and I’ll be heading back to the UK for the event. This time Grant and Kathi will be there and we’ll be joined by the amazing Ike Ellis. This is an event to watch.

I’ll be covering some GDPR compliance material, with good information that all DBAs and system administrators should know. This is a “prove you’re doing what you’ve decided to do” session.

In the afternoon I’ll also cover some enhancements to SQL Clone in our Privacy bundle that help you mask data from production and build your dev/test environments in a way that protects your sensitive data.

Some Redgate products will be used to show how to accomplish the same tasks, but all of us will be talking about core concepts and ideas that you need to know. We’d like you to consider our software if it provides you with value for the cost, but either way, you will learn about things that we all think are important.

Register today and I’ll see you in a few weeks.

Posted in Blog | Tagged , , | Leave a comment

Valentine’s Day for Data Professionals

It’s Valentine’s Day today, and if you’ve forgotten, you still have time to do a little shopping or show your partner that you value them. While this isn’t a public holiday in any country that I know of, many people do want to celebrate with someone special to them. In fact, if you forget about this day, chances are the next few might not be so pleasant.

For many of us, we’ll spend nearly as much time at work each week as we might spend with our families. Between a few extra hours here and there at work, commuting, and sleep, it may often feel like we spend more time at work. Hopefully that’s not the case, and if it is, you should re-evaluate your current position.

Most of us do have a relationship with our career. We’ve chosen it for some reason and are often locked into doing certain work in the short term. With today being a relationship holiday, I wanted to ask you to think about whether you really love your career.

I’m not speaking about your current job and employer. You might enjoy being a developer, DBA, or other data professional while not being enamored with the position and place you work now. I’m asking about the long term: are you happy with your career choice, and do you plan to continue on this path?

If you are, good for you, and I hope you find a great position that suits your life. If not, it doesn’t matter what you’ve done in the past. You can start moving forward in another direction today.

I love my job, and I’ve enjoyed the last 25 years of working with SQL Server. I don’t know if I want to do another 25, but I certainly am still in love with my choice today.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

Cloud Safe

I saw a question recently from an individual that was trying to decide if it made sense to move their databases to the cloud. In this case, management wanted to move, but the technical staff had concerns about Disaster Recovery in the cloud. These are very valid concerns for any system, as technical staff is often responsible for data, regardless of who decides on the architecture of a system.

The concerns got me thinking a bit. Is the cloud safer for DR? It’s certainly an “it depends” question, especially as the “cloud” isn’t necessarily the same thing for each of us. Some of us would use IaaS, with VMs in the cloud. Others might choose PaaS, with RDS or Azure SQL Database as a platform. Still others might want something like Managed SQL Instances, which is like IaaS+, or maybe PaaS#. I’m not completely sure how to classify this.

In any case, your choice of cloud architecture can mean better or worse DR. The closer you are to IaaS, the more that you still have the same responsibilities that you might have inside your own data center. The difference is that hardware replacements or options are often quicker to procure, though perhaps with limited choice.

If you choose PaaS, then you have different DR capabilities and responsibilities. Your vendor might handle some aspects of DR and remove the need for you to worry about hardware or regular backups, but you might need to worry about other items. Your vendor might give you point-in-time (PIT) recovery, but you might not want a wholesale database replacement in a busy system, especially if you’ve processed a few thousand transactions since someone ran that UPDATE without a WHERE clause. In that case, perhaps you want to ensure you can restore your database elsewhere, or that you have other options.
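As a sketch of that “restore elsewhere” option on an instance you control, a point-in-time restore to a side-by-side copy might look like this (the database, file, and backup names here are hypothetical):

```sql
-- Restore a copy of the database, not the original, stopping
-- just before the accidental UPDATE ran. Names are placeholders.
RESTORE DATABASE Sales_Recovery
    FROM DISK = N'X:\Backups\Sales_Full.bak'
    WITH MOVE 'Sales'     TO N'X:\Data\Sales_Recovery.mdf',
         MOVE 'Sales_log' TO N'X:\Data\Sales_Recovery.ldf',
         NORECOVERY;

RESTORE LOG Sales_Recovery
    FROM DISK = N'X:\Backups\Sales_Log.trn'
    WITH STOPAT = '2018-02-12 14:29:00',  -- just before the incident
         RECOVERY;
```

From the recovered copy you can pull back just the affected rows, rather than rolling the whole busy system back and losing those thousands of newer transactions.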

Many of us know that managing systems is complex work. Not every environment can be handled in the same way, and we often implement exceptions in both technology and staff knowledge. Ensuring your application and environment can recover from a DR situation often requires detailed knowledge of both the requirements and the capabilities of the environment. While I’m not afraid of migrating to the cloud, I’d want to be sure I was prepared to answer questions from management if there are issues. After all, they’re going to look to me, not some vendor, for answers.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.4MB) podcast or subscribe to the feed at iTunes and Libsyn.


Posted in Editorial | Tagged , | Leave a comment

Door Number One for T-SQL Tuesday #99

This is a good month for T-SQL Tuesday. It’s the last of the double-digit editions of this blog party, which is amazing. I can’t believe we’re 99 months in. I’m sure Adam Machanic is proud of his efforts here.

Aaron Bertrand hosts this month, and he’s given us a choice: door #1 or door #2. I’m choosing door #1.

Daddy’s Girl

I grew up in Virginia Beach, playing volleyball from elementary school through high school, back when you could only score when you served. I loved the game, and eventually started to play on the beach during the summers at college and after I graduated. The game faded for me as I got married, had kids, and got busy with my career.

When my daughter was 8, I signed her up for a recreational league. They needed help coaching, and I volunteered. With another parent, we coached the same group of 7 or 8 girls for 4 years, 8 seasons, and had a great time. A few of us parents also got involved in adult leagues, playing at various levels.

When my daughter turned 13, she moved to a competitive league, as I watched. Across the years, I’ve watched my daughter play and love the game. As she became more independent, I started to spend time helping other teams, learning the modern game with rally scoring, liberos, and more.

This year I’ve taken on coaching a team of 14-year-olds. Two nights a week, some Saturdays, and since I’m doing less traveling for work, I’m spending my weekends coaching at tournaments and having a great time. As my body ages and I spend less time on my own activities, I’ve turned to coaching others, and I love it. It’s a great way for Coach Steve to spend some time away from work.

Posted in Blog | Tagged | Leave a comment

Migrating MySQL Databases

A little out of my usual area, but I needed to move a WordPress database to a new service. The company running the db decided that they don’t want to keep a lower-level, consumer-type tier. Since their minimum plan is now $20/month, I decided to look elsewhere. My first move was to try Azure Database for MySQL. This is a quick look at moving the MySQL data.

I have MySQL Workbench, so I started that up. I saw a note that this contained a migration wizard. I first created a MySQL database in Azure, which is really simple. Then I created two connections from MySQL Workbench. One to the existing database and one to my new one.

[screenshot: MySQL Workbench connections]

The next step was to start the Migration Wizard. This is under the Database menu.

[screenshot: MySQL Workbench Database menu]

As with all wizards, this opens with a welcome page.  I clicked Start Migration to get going.

[screenshot: Migration Wizard welcome page]

The wizard is fairly simple. Once I had this working, I walked through the wizard, choosing my source and destination. This was a simple process and things connected up once I’d opened the firewall.

[screenshot: MySQL Workbench source and destination selection]

After verifying connections, I selected everything. Since this is a WordPress site, it’s a fairly small schema and set of objects.

[screenshot: MySQL Workbench object selection]

I basically clicked next, next, and let everything transfer. This only took about 5 minutes.

[screenshot: MySQL Workbench data transfer]

Once this was done, I queried the database, and everything was working. I saw my data and it appeared complete. No errors, so I assumed things were correct. Row counts seemed to verify this.
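A quick sanity check along those lines might look like this (a sketch; the schema name is a placeholder for your WordPress database, and for InnoDB tables the TABLE_ROWS figure is an estimate, so an exact check needs COUNT(*)):

```sql
-- Approximate row counts per table in the migrated schema.
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'wordpress'
ORDER BY TABLE_NAME;

-- Exact count for one table, to compare against the source.
SELECT COUNT(*) FROM wp_posts;
```

Run the same queries against the old and new databases and compare the results.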

[screenshot: MySQL Workbench query results]

This was pretty simple, perhaps even easier than using SSIS and the Copy Database Task since this was built in.

Now to figure out how to re-point the application.

Posted in Blog | Tagged , , | Leave a comment

Production Scripts

One of the most useful parts of SQL Server is the SQL Agent scheduler. Over the years I’ve used this subsystem to automate work and ease the administrative burden of running a SQL Server instance. At times I’ve even used my SQL Agent to fire off business reports and alerts to nontechnical people that might need to take some action based on the data in our system.

One common task that I see SQL Agent used for is backing up a database using some method (Ola Hallengren’s scripts, SQL Backup, etc.) and then deleting the oldest backup. This is a common way of ensuring that you keep a set number of backups around for your business. Often if the first step (the backup) fails, then the second step (delete backup files) doesn’t run. There are times when the execution choices between the steps aren’t configured or get changed, and you run into the situation that Paul Randal describes in issue #170 of the SQLskills newsletter. The backup step starts to fail, but the deletes keep running until you don’t have any more backups stored.
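One defensive pattern, sketched here with hypothetical names, is to have the cleanup step check msdb for a recent successful full backup before it deletes anything (note that xp_delete_file, which maintenance plans use, is undocumented):

```sql
-- Refuse to purge old backup files unless a full backup of this
-- database finished in the last 24 hours. Names are hypothetical.
DECLARE @LastFull datetime;

SELECT @LastFull = MAX(backup_finish_date)
FROM msdb.dbo.backupset
WHERE database_name = N'Sales'   -- hypothetical database
  AND type = 'D';                -- 'D' = full database backup

IF @LastFull IS NULL OR @LastFull < DATEADD(HOUR, -24, GETDATE())
    RAISERROR('No recent full backup found; skipping file cleanup.', 16, 1);
ELSE
BEGIN
    DECLARE @Cutoff datetime = DATEADD(DAY, -7, GETDATE());
    -- Parameters: file type (0 = backup), folder, extension, cutoff date.
    EXECUTE master.sys.xp_delete_file 0, N'X:\Backups\Sales', N'bak', @Cutoff;
END;
```

With this check, a string of failed backups raises an error and fails the job step instead of silently eating your remaining backup files.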

How does this happen? It’s surprisingly easy, because most people don’t really treat their SQL Agent scripts and code like other code. This code often isn’t tested well, and it’s rarely tested across time or with unusual conditions. After all, who wants to muck with the time on a production server to be sure that your Agent job works across days? Who wants to force a backup to fail to see how the job handles an issue? Who wants to double-check their code when BACKUP is fairly simple syntax and a few quick tests of the delete code work with text files renamed with .bak extensions?

Do you treat your SQL Agent jobs like the production code that they contain? You should. In fact, moving to a more reliable, repeatable, DevOps style environment means that any code in an Agent job needs to be version controlled, it needs to be tested, and it should be a part of some (hopefully, automated) deployment process that ensures that changes to the code are recorded and you are confident of which version of code is on your system.
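As a starting point for getting that code under version control, you can pull every job step’s command text straight out of msdb, for example to diff it against what’s in your repository (a sketch, not a full deployment process):

```sql
-- Extract the command text of every Agent job step on this instance,
-- e.g. to compare against the versions checked into source control.
SELECT  j.name       AS job_name,
        s.step_id,
        s.step_name,
        s.subsystem,
        s.command
FROM    msdb.dbo.sysjobs AS j
JOIN    msdb.dbo.sysjobsteps AS s
        ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;
```

Even a scheduled export of these results gives you a history of what actually runs on the server, which you can then reconcile with your deployment pipeline.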

SQL Agent is a powerful tool, but it’s also one that should be treated like a production system. Downtime and simple errors from careless scripting shouldn’t be tolerated. We should, and can, do better.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

What to Read in 2018

Last week I was looking for Database Weekly links and ran across Gail Shaw’s list of books from 2017. It’s an interesting list of fiction, and I might pick up a few this year if I run out of ideas. This reminds me of Paul Randal’s review (done annually) of his reading. I usually find a book or two in there that I’d like to read to enjoy a break from life or expand my horizons.

I was thinking about this as I’ve seen a few other posts from software and database users with books that they recommend. This was an interesting list from Xiaohan Zeng and I like this general list from John Sansom for SQL Server. I’ve run across a few others, especially from all the learning plans that people built for T-SQL Tuesday #97.

I have no shortage of fiction that I enjoy reading on a regular basis. It’s a way for me to get a break from life, and I’m usually working my way through some book. This month I read the Will Robie series from David Baldacci (1-4) and then started re-reading the Jesse Stone series by Robert B. Parker and others. Those are my downtimes and breaks from life. On the professional side, I’ve been working my way through R in a Nutshell as well as Thoughtful Machine Learning in Python. The latter has been ongoing for some time, and it’s moving rather slowly. It’s a tough one to work through.

There are classic texts in our industry, as well as some really interesting ones that many of us might enjoy. Perhaps there is something you’ve read in the last year you recommend, or there are some you want to tackle. Today I’m asking for your reading recommendations.

In our careers, I know many of us like shorter, focused posts, some like videos, and others prefer learning in person from others. Those are all great ways to learn, but I still like working through a book to either focus my learning or escape from life. Let us know today what you plan to read this year or what you’d recommend for others, either to help them become better database developers or just enjoy a good story.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

Moving Lookup Data with ReadyRoll

I’ve been using ReadyRoll for a small project and wanted to move some lookup data. There are some tables where I want to keep data in sync across environments, so if my build and release pipeline can do this, great. ReadyRoll makes this easy, and this is a short post to show how.

The Current Environment

I’ve got a table in my dev database that has SQL Server versions. I can see the list here:

[screenshot: SSMS query results from the dev database]

In my QA and staging systems, I see this (only one is shown, but it’s the same in both).

[screenshot: SSMS query results from the QA database]

The goal is to have this data migrate with other changes in a build and release pipeline.

Adding Data to the Project

I’ve already built this as a ReadyRoll project and set up a build and release pipeline in VSTS. Now I want to include my data.

In the ReadyRoll tool window, I have refreshed the project and I see everything in sync. That’s good, and that’s how I want things to be when starting a small piece of development.

[screenshot: ReadyRoll tool window in Visual Studio]

I want to find my table, so I need to expand the “Identical Objects”. When I do that, I see a list of all objects in the database.

[screenshot: Identical Objects list in Visual Studio]

I can scroll down to find my table. When I do, I’ll right click it to get a few options. One of these is “Include Table Data”. We want to pick this one.

[screenshot: Include Table Data context menu]

Once I do that, I get a note to refresh again. This is so ReadyRoll can determine what table data needs to be added to a migration script.

[screenshot: refresh prompt in Visual Studio]

My changes show that I need to get 11 rows of data into a new script.

[screenshot: pending changes showing 11 rows of data]

I click Import and generate script, which will build my migration script and add it to the project. In my case, this is script 4.

[screenshot: generated migration script in the project]

Note that this script has a few things in it, based on a call to SQL Data Compare in the background. First, it sets a dateformat. Next, it does the insert for an empty table, since that is the situation I’m in. Last, this uses the SET IDENTITY_INSERT option.

If you don’t like these options, change them here. You can alter this script to suit your environment. Remove the IF, let the identities be what they are, make the changes that matter to you.
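For illustration, the generated script follows a pattern roughly like this (the table, columns, and values here are hypothetical stand-ins for my SQL Server versions table, not ReadyRoll’s exact output):

```sql
SET DATEFORMAT ymd;

-- Only seed the table if it is empty, which is my situation here.
IF NOT EXISTS (SELECT 1 FROM dbo.SQLVersions)
BEGIN
    -- Preserve the identity values from the source environment.
    SET IDENTITY_INSERT dbo.SQLVersions ON;

    INSERT INTO dbo.SQLVersions (VersionID, VersionName)
    VALUES (1, 'SQL Server 2014'),
           (2, 'SQL Server 2016'),
           (3, 'SQL Server 2017');

    SET IDENTITY_INSERT dbo.SQLVersions OFF;
END;
```

The IF guard and IDENTITY_INSERT are the two pieces you’d most likely change if your environments need different behavior.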

The only thing I’ll do is click the script name twice to edit it.

[screenshot: renaming the migration script]

The numeric sequencing is important. The rest, not so much. I’ll choose something simple here.

[screenshot: renamed migration script in the project]

Build the Project

I can build locally, and I always should, to be sure things work. Once I see this and I’ve tested a few things, I’ll let the “system” do more work.

[screenshot: successful local build in Visual Studio]

The nice thing about ReadyRoll is that I can include my database project alongside application projects, if I want. For application developers or hybrid developers, I can use a consistent interface for saving changes to version control.

In Team Explorer, I’ll check changes, add a comment, and then Commit and Push.

[screenshot: Team Explorer commit and push]

If I go to VSTS, I’ll see the build in progress. I’m using a local build agent and usually within 15 seconds, my build will start. Sometimes it’s really fast.

[screenshot: VSTS CI build summary]

If I click the build number, I can see I’ve barely caught this before it finished.

[screenshot: VSTS build in progress]

The build completes, so now I need to check releases. I have a CD trigger that will deploy to a local QA instance when the build succeeds. If I go look at the release, I’ll see that’s occurred.

[screenshot: VSTS release deployment]

This worked, so let’s check the QA instance. As you can see, my data was deployed.

[screenshot: SSMS query results from the QA instance]

If you were watching closely, you’ll see I have a small data issue. I need to correct that, but that’s a topic for another post.

ReadyRoll is a part of the SQL Toolbelt, and if you’re a customer, I’d urge you to start a PoC and see what you think of the tool. ReadyRoll Core, with limited features, is included with Visual Studio Enterprise, and if you’re a Microsoft customer, you can try that. We also have 14-day trials if you’d like to try this on your own.

Posted in Blog | Tagged , , , | Leave a comment

The Learning Choice

I advocate constant learning for those of us in technology. This might be especially important for data professionals, as we are at the nexus of so much activity these days. Data is important, our systems are under scrutiny for security, there is an ever-growing set of applications that want to connect, and we often struggle to keep up in a fast-paced, DevOps world. It feels like my plate is overflowing with the need to understand PowerShell, JSON, the cloud, Hadoop and its myriad related technologies on the Microsoft stack, machine learning, data science, statistics, and, when I have time, making sure I’m up to date with SQL.

Recently I’ve run across a few pieces that ask where to spend that time. There’s a blog on R vs. Python (I’ve seen quite a few of these) and a thread on deciding whether ML skills are something a database developer should learn. These are just a few of the ones I’ve seen in the last month, as I guess lots of people are thinking and writing about New Year resolutions and goals.

We have a limited amount of time for learning each day, week, month, or year. We might be able to increase that for a short time, perhaps months, but all of us need some balance in life. Given that there’s a limited amount of time, and that we need some focus and practice over time to absorb skills, where do you spend your effort in learning?

Jeff Moden advocates sharpening your T-SQL skills with deep study, as he sees weak T-SQL as one of the main reasons that applications perform poorly (and that people post in our forums). Others lean towards adding new skills, perhaps adding SSIS proficiency or transitioning your analysis skills to Extended Events. Still others might seek to become more well-rounded by spending time on C# or Python.

There are also other views, such as focusing on the skills you need in your current position. If you’re not happy, maybe you want to devote learning effort to get a job in an area where you’re passionate. Or maybe you just want to make more money, and are willing to spend time in those technologies that are paying more.

The choice is up to you, and I can’t give you the answer that suits your path. You must find a balance between work and the rest of life, and make the choices that best fit you. I don’t want to imply this is easy. Thinking about the things that really matter to you and affect how you approach each day is a difficult exercise. My advice is to go in one of two ways. If your career is something that you feel passionate about, focus your learning in a way that will help you find a job that brings you purpose. If you just need a job to pay the bills, learn things that increase your earnings, but ensure that you have family, faith, hobbies, or some other endeavor outside of work that brings you purpose and fulfillment.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (5.0MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

Getting Your SET Options

Have you ever used @@options? I haven’t typically needed this, but there are times that you might wonder what options are set for a session. You can check in SSMS, look for defaults, etc., but there’s an easy way.


SELECT @@OPTIONS;

This uses the @@options function, which contains a bitmap of your session settings. There are defaults, set with the sp_configure “user options” setting, but these can be overridden per session.

If you want to decode the options, which are also stored as bitmaps in various places, you can use this BOL article.

Or use a script like this:

DECLARE @i int = @@OPTIONS;

SELECT ConstraintChecking = CASE WHEN (@i & 1 = 1 ) THEN 'ON' ELSE 'OFF' END,
        ImplicitTransactions = CASE WHEN (@i & 2 = 2 ) THEN 'ON' ELSE 'OFF' END,
        CursorCloseonCommit = CASE WHEN (@i & 4 = 4 ) THEN 'ON' ELSE 'OFF' END,
        AnsiWarnings = CASE WHEN (@i & 8 = 8 ) THEN 'ON' ELSE 'OFF' END,
        AnsiPadding = CASE WHEN (@i & 16 = 16 ) THEN 'ON' ELSE 'OFF' END,
        AnsiNulls = CASE WHEN (@i & 32 = 32 ) THEN 'ON' ELSE 'OFF' END,
        ArithAbortSetting = CASE WHEN (@i & 64 = 64 ) THEN 'ON' ELSE 'OFF' END,
        ArithIgnoreSetting = CASE WHEN (@i & 128 = 128 ) THEN 'ON' ELSE 'OFF' END,
        QuotedIdentifier = CASE WHEN (@i & 256 = 256 ) THEN 'ON' ELSE 'OFF' END,
        NoCount = CASE WHEN (@i & 512 = 512 ) THEN 'ON' ELSE 'OFF' END,
        AnsiNullDefaultOn = CASE WHEN (@i & 1024 = 1024 ) THEN 'ON' ELSE 'OFF' END,
        AnsiNullDefaultOff = CASE WHEN (@i & 2048 = 2048 ) THEN 'ON' ELSE 'OFF' END,
        ConcatNullYieldsNull = CASE WHEN (@i & 4096 = 4096 ) THEN 'ON' ELSE 'OFF' END,
        NumericRoundAbort = CASE WHEN (@i & 8192 = 8192 ) THEN 'ON' ELSE 'OFF' END,
        XactAbort = CASE WHEN (@i & 16384 = 16384 ) THEN 'ON' ELSE 'OFF' END;

For the row-based people, how about this:

DECLARE @i int = @@OPTIONS;

SELECT 'ConstraintChecking', CASE WHEN (@i & 1 = 1 ) THEN 'ON' ELSE 'OFF' END
SELECT 'ImplicitTransactions', CASE WHEN (@i & 2 = 2 ) THEN 'ON' ELSE 'OFF' END
SELECT 'CursorCloseonCommit', CASE WHEN (@i & 4 = 4 ) THEN 'ON' ELSE 'OFF' END
SELECT 'AnsiWarnings', CASE WHEN (@i & 8 = 8 ) THEN 'ON' ELSE 'OFF' END
SELECT 'AnsiPadding', CASE WHEN (@i & 16 = 16 ) THEN 'ON' ELSE 'OFF' END
SELECT 'AnsiNulls', CASE WHEN (@i & 32 = 32 ) THEN 'ON' ELSE 'OFF' END
SELECT 'ArithAbortSetting', CASE WHEN (@i & 64 = 64 ) THEN 'ON' ELSE 'OFF' END
SELECT 'ArithIgnoreSetting', CASE WHEN (@i & 128 = 128 ) THEN 'ON' ELSE 'OFF' END
SELECT 'QuotedIdentifier', CASE WHEN (@i & 256 = 256 ) THEN 'ON' ELSE 'OFF' END
SELECT 'NoCount', CASE WHEN (@i & 512 = 512 ) THEN 'ON' ELSE 'OFF' END
SELECT 'AnsiNullDefaultOn', CASE WHEN (@i & 1024 = 1024 ) THEN 'ON' ELSE 'OFF' END
SELECT 'AnsiNullDefaultOff', CASE WHEN (@i & 2048 = 2048 ) THEN 'ON' ELSE 'OFF' END
SELECT 'ConcatNullYieldsNull', CASE WHEN (@i & 4096 = 4096 ) THEN 'ON' ELSE 'OFF' END
SELECT 'NumericRoundAbort', CASE WHEN (@i & 8192 = 8192 ) THEN 'ON' ELSE 'OFF' END
SELECT 'XactAbort', CASE WHEN (@i & 16384 = 16384 ) THEN 'ON' ELSE 'OFF' END

UPDATE: From the comments, an UNPIVOT might be easier to read for some of you.

DECLARE @i int = @@OPTIONS;

SELECT [Option], [Value]
FROM (
    SELECT ConstraintChecking = CASE WHEN (@i & 1 = 1 ) THEN 'ON' ELSE 'OFF' END,
           ImplicitTransactions = CASE WHEN (@i & 2 = 2 ) THEN 'ON' ELSE 'OFF' END,
           CursorCloseonCommit = CASE WHEN (@i & 4 = 4 ) THEN 'ON' ELSE 'OFF' END,
           AnsiWarnings = CASE WHEN (@i & 8 = 8 ) THEN 'ON' ELSE 'OFF' END,
           AnsiPadding = CASE WHEN (@i & 16 = 16 ) THEN 'ON' ELSE 'OFF' END,
           AnsiNulls = CASE WHEN (@i & 32 = 32 ) THEN 'ON' ELSE 'OFF' END,
           ArithAbortSetting = CASE WHEN (@i & 64 = 64 ) THEN 'ON' ELSE 'OFF' END,
           ArithIgnoreSetting = CASE WHEN (@i & 128 = 128 ) THEN 'ON' ELSE 'OFF' END,
           QuotedIdentifier = CASE WHEN (@i & 256 = 256 ) THEN 'ON' ELSE 'OFF' END,
           NoCount = CASE WHEN (@i & 512 = 512 ) THEN 'ON' ELSE 'OFF' END,
           AnsiNullDefaultOn = CASE WHEN (@i & 1024 = 1024 ) THEN 'ON' ELSE 'OFF' END,
           AnsiNullDefaultOff = CASE WHEN (@i & 2048 = 2048 ) THEN 'ON' ELSE 'OFF' END,
           ConcatNullYieldsNull = CASE WHEN (@i & 4096 = 4096 ) THEN 'ON' ELSE 'OFF' END,
           NumericRoundAbort = CASE WHEN (@i & 8192 = 8192 ) THEN 'ON' ELSE 'OFF' END,
           XactAbort = CASE WHEN (@i & 16384 = 16384 ) THEN 'ON' ELSE 'OFF' END
) AS options
UNPIVOT (
    [Value] FOR [Option] IN ([ConstraintChecking], [ImplicitTransactions], [CursorCloseonCommit],
        [AnsiWarnings], [AnsiPadding], [AnsiNulls], [ArithAbortSetting], [ArithIgnoreSetting],
        [QuotedIdentifier], [NoCount], [AnsiNullDefaultOn], [AnsiNullDefaultOff],
        [ConcatNullYieldsNull], [NumericRoundAbort], [XactAbort])
) AS T1
ORDER BY [Option];
Posted in Blog | Tagged , , | 3 Comments