What’s a SQL Compare Snapshot?

I saw a short video from Grant Fritchey on the Redgate channel that shows how to create a snapshot in SQL Compare. I think snapshots are great, and there are lots of uses for them (more on that later), but I find that sometimes I use the term and people are confused.

Snapshots

This is one of those overloaded terms. We have SAN snapshots, backup snapshots, database snapshots, and more, and many of us deal with several of these. All of them really mean a point-in-time view of a particular data set (image, machine, disk, database, etc.).

In SQL Server, we’ve had database snapshots, which create a second database that is a copy of the original as of a point in time. There are lots of restrictions on these, but they are useful for quick rollbacks during deployments, point-in-time reporting, etc. Useful little creatures.

SQL Compare Snapshots

Many of you know SQL Compare as a tool that looks at two databases, gives you the differences in all objects, allows filtering, and also builds a deployment script to make the target database, DatabaseB, look like the source database, DatabaseA. In other words, if I add a table and a view to the source and compare it with a destination, I’d see the new table and view as differences that I need to deploy.

That’s great when working with databases. However, sometimes I don’t have control over a database and can’t be sure that it has a stable codebase (no changes). SQL Compare snapshots let me save the state of the database schema into a file in a folder.
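If you want to script this rather than click through the GUI, SQL Compare also ships a command-line client. This is only a sketch: the install path and the exact switch names here are assumptions, so verify them against the command-line documentation for your SQL Compare version.

```powershell
# Hypothetical sketch: create a SQL Compare snapshot from the command line.
# The install path and the /MakeSnapshot switch are assumptions - check them
# against the command-line documentation for your version of SQL Compare.
$sqlCompare = 'C:\Program Files (x86)\Red Gate\SQL Compare 13\sqlcompare.exe'

# Source instance and database, plus the snapshot file to write
& $sqlCompare /Server1:".\SQL2016" /Database1:"sandbox" /MakeSnapshot:"C:\Snapshots\sandbox.snp"
```

Running something like this after each deployment gives you a dated, read-only record of the schema at that moment.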

If I grab a random database and make a snapshot, the database looks like this:

[Screenshot: the sandbox database and its objects on the (local)\SQL2016 instance in Management Studio]

and the snapshot looks like this:

[Screenshot: the snapshot file in the Snapshots folder]

Not quite the same. However, all the code is in the snapshot, stored as a single binary file. If I use this as a target in SQL Compare:

[Screenshot: selecting the snapshot as the target in a new SQL Compare project]

I’ll see this if I haven’t changed the database.

[Screenshot: SQL Compare reporting that the database and the snapshot are identical]

Why Use Snapshots?

One of the main reasons I like to use snapshots is that they are a stable view of code, like a tag or branch in a VCS, but they aren’t modified after they are created. They are a great way for me to capture the state of my code after a deployment (or before one).

Later I can compare my snapshot with the database and detect changes. This is great for detecting production drift, where someone might change production without my knowledge.
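That drift check can be scripted as well. Again, this is a hedged sketch: the /Snapshot1 and /AssertIdentical switches and the exit-code behavior are assumptions to confirm in the SQL Compare command-line documentation for your version.

```powershell
# Hypothetical sketch: compare a saved snapshot against the live database and
# flag any drift. The /Snapshot1 and /AssertIdentical switches are assumptions -
# confirm them in the SQL Compare command-line documentation.
$sqlCompare = 'C:\Program Files (x86)\Red Gate\SQL Compare 13\sqlcompare.exe'

& $sqlCompare /Snapshot1:"C:\Snapshots\sandbox.snp" /Server2:".\SQL2016" /Database2:"sandbox" /AssertIdentical

# A non-zero exit code indicates the schema differs from the snapshot
if ($LASTEXITCODE -ne 0) { Write-Warning "Schema drift detected in sandbox" }
```

Scheduled nightly, a script like this turns the snapshot into a simple drift alarm.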

I’m sure you could come up with other uses for a point in time view of your code.

Posted in Blog | Tagged , , | Leave a comment

Test Database Compatibility with dbatools

I really like the dbatools project. This is a set of PowerShell cmdlets that are built by the community and incredibly useful for migrations between SQL Servers, but also for various administrative actions. I have a short series on these items.

One item that few DBAs think about often is compatibility level. We tend to create databases with the default level, based on the instance. If we upgrade an instance, I’ve seen many people assume the level changes with the upgrade script (it doesn’t; user databases keep their existing level unless it falls below the new version’s minimum). Likewise, if you restore a database, or attach one, the compatibility level might not match, and that could cause issues.

The dbatools project has thought of this, and they have a great cmdlet that can help you build checks of all your databases. The Test-DbaDatabaseCompatibility cmdlet will return the databases and whether they match the server level. You can also get more details if you like. Here’s a quick view. I can run:

Test-DbaDatabaseCompatibility -SqlServer .\SQL2016

This gives me a lot of databases on this instance. In fact, they scroll off the results quickly.

[Screenshot: Test-DbaDatabaseCompatibility output scrolling in the PowerShell ISE]

I don’t really worry about the databases that match. In fact, I expect that most databases do. My rule for DBA information is that I only want to see the items that might cause me to take some action. Here, that means only the databases that don’t match.

I can easily use filtering to do this:

[Screenshot: the filtered results in the PowerShell ISE]
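A sketch of that filter is below. The property names (ServerLevel, DatabaseCompatibility) are assumptions, since the exact output shape depends on your dbatools version; pipe the cmdlet to Get-Member to see the real names before relying on them.

```powershell
# Show only the databases whose compatibility level doesn't match the server.
# The property names (ServerLevel, DatabaseCompatibility) are assumptions -
# run the cmdlet through Get-Member to confirm them for your dbatools version.
Test-DbaDatabaseCompatibility -SqlServer .\SQL2016 |
    Where-Object { $_.ServerLevel -ne $_.DatabaseCompatibility } |
    Select-Object Database, ServerLevel, DatabaseCompatibility
```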

Of course, I could simply wrap this in a function as well, add it to a pipeline that takes a list of instances, or more. For me, I’d want each instance to check itself with this cmdlet and record the results. That way I could periodically review the list and decide if I can actually change any of these.
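A minimal sketch of that idea, assuming hypothetical property names (check them with Get-Member) and a made-up list of instances:

```powershell
# Hypothetical sketch: run the compatibility check across a list of instances
# and keep only the mismatches. The instance names and the property names
# are assumptions for illustration.
function Get-CompatibilityDrift {
    param([string[]]$Instances)

    foreach ($instance in $Instances) {
        Test-DbaDatabaseCompatibility -SqlServer $instance |
            Where-Object { $_.ServerLevel -ne $_.DatabaseCompatibility }
    }
}

# Usage: the results could be appended to a table or file for periodic review
Get-CompatibilityDrift -Instances '.\SQL2016', '.\SQL2014'
```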

I could also put a query into my SQL Monitor system as a custom metric and have it tracked there, but I wouldn’t want alerts for this. It’s not critical enough to interrupt me too often.

Posted in Blog | Tagged , , | Leave a comment

Interviews: What is ACID?

I once had a job interview with a large group of employees at a company. They were arranged around a table, with me at one end. The interview was a round robin affair, with someone asking me a question and me answering. They would take turns, going around the table for a couple hours. Not the easiest or hardest interview I’ve had, but one of the more interesting.

The first question from the gentleman to my left was, “What does ACID mean?”. My answer: “Isolated, Consistent, …”

I don’t have a problem taking a few seconds to think, but I have an internal clock, much like an NFL quarterback. Mine is not the five-second one many QBs have, but I know that after about 20 seconds, people start wondering if I’ll answer at all, or if I’m still focused on the interview. If I go 30 seconds, I might have blown the interview. What would you do here? Do you know the other terms? More importantly, can you answer at a level beyond the words of the acronym?

My response after 10 seconds or so was to admit I couldn’t remember the words. I could google for the meaning, but I did know this applies to relational database transactions, intending to ensure that the data always moves from one known, valid state to another (consistency). Each transaction must fully complete or be completely undone (atomicity); we can’t have the classic issue of depositing money in one account and failing to remove it from another. Transactions also cannot interfere with each other’s work: a transaction on a piece of data must complete before the next transaction can modify it (isolation). Finally, if the system were to crash, our database must not restart with data in an unknown state (durability). Therefore, in SQL Server, we write to the log first, ensuring the transaction is recorded before we can be sure the data changes are hardened. There are more details you could add to my answer, but this is a core, foundational part of relational databases.

I have never been asked this question in another interview, but I do think this is one of those core concepts that helps me understand and explain other parts of SQL Server, and even of how to build software. I think understanding this will help you answer other interview questions with more depth and knowledge. Hopefully, you’ll have a better answer than I did, with the actual words that make up the acronym.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.8MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | 2 Comments

The Library Job Hunt

I was reminded of the ways I’ve looked for jobs by a post from the Job Headhunter about library time. Rather than looking at what jobs are available, the recommendation is to take a targeted approach, using the library as a place to focus your energies and find the job that you really want, or at least the type of job you want.

Far too many of us look for jobs when we need them, often feeling financial pressures to accept the first offer we get. The pace of hiring usually prevents us from considering multiple offers, as the timing of interviews may not align. We make decisions for expediency, more than any other reason.

We also usually fall from job to job, perhaps accepting a position that doesn’t quite suit us, or keeps us in the same position when we’d rather move. Again, time and financial pressures can force bad decisions. There’s also the fact that despite the training and schooling many of us have gone through, we don’t usually think about other options in the hectic, day-to-day chaos of our lives.

The Job Headhunter recommends spending time in the library researching and thinking about different jobs. Certainly you could do this from the comfort of your home with technology, but I agree with the premise. The library can help you focus without the distractions at home. You can spend time thinking about your life, your goals, your interests, your skills, and more. Come up with a plan that drives your career forward. Certainly your research might give you some unexpected answer, but likely this will help you focus on some intersection of your skills and interests.

Life is short, and there are opportunities out there for you. Choose the ones you want, hone your skills, and actively pursue the employment that fits in your life. The work, the company, the location, whatever is most important to you.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.0MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | 1 Comment

Join me at VS Live – Anaheim

We’re just a month away from Visual Studio Live in Anaheim where I’ll be speaking along with many others. Join me today, register with ANSPK14 and save $400. This gives you the 4-day, all access conference package for a reduced rate.

California coding, with experts in DevOps, Cloud Computing, Databases, Visual Studio, Mobile, and more. This is a multi-technology conference, and you can learn about the different technologies you need for your work.

The Agenda is packed, and I’ll be covering security and testing for SQL Server, but I’m looking forward to checking out some Azure, VS and VSTS tricks, Lambdas and more.

If you’re looking to learn some SQL Server and more this fall, think about joining me in California for a warm break. Register today and enjoy some fall learning in LA.

Posted in Blog | Tagged , , | Leave a comment

Experimenting with VSCode instead of the ISE

I’ve been playing with PowerShell here and there, learning a few things as I work with the dbatools project and other tools.

I saw a tweet from Rob Sewell on using VS Code for PoSh. I’ve tried it before, but it felt cumbersome and hard, and I didn’t spend much time on it before dropping back to the ISE, where I was comfortable.

However, Rob’s tweet mentioned a post from Mike Robbins on configuring VSCode, so I decided to give it another try. It’s not that the ISE doesn’t work well, but I keep hearing people talking about VSCode, which has git support, and so I thought this might be worth trying.

I won’t repeat anything from Mike’s post; you can try that out yourself. I already had the PoSh extension, but it has been upgraded since I first used it, and it now pops up the integrated console when I open a .ps1 file, which makes it feel like the ISE.

[Screenshot: a .ps1 file open in Visual Studio Code with the integrated console]

I don’t remember that from before, and I certainly like it. I also skipped the VSCode-icons extension in favor of VSCode Great Icons. I mean, I want to be great at this stuff.

The one thing I did change from Mike’s settings was the zoom level. Zero is a little low for my old eyes, and 2 (where I’ve been working) is too big here. I chose 1.

[Screenshot: the zoom level setting in VSCode alongside the ISE]

Using VSCode

The main thing you want to do with an editor is write code, so I decided to write some simple code. I have a few real tasks to do, but I started with something simple: a few lines of PoSh. It isn’t terribly useful, but I wanted to play with the editor and the debugger.
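The script itself isn’t legible in the screenshots; any small loop with a variable or two is enough to exercise intellisense and the debugger. Something like this hypothetical sketch works (the names are made up for illustration):

```powershell
# A trivial script for exercising the editor and debugger - set a breakpoint
# inside the loop and watch $total grow as you step with F10.
$databases = 'sandbox', 'SimpleTalk', 'WideWorld'
$total = 0

foreach ($db in $databases) {
    $total += $db.Length
    Write-Output "Checked $db (running total: $total)"
}

Write-Output "Processed $($databases.Count) names"
```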

[Screenshot: the sample script in Visual Studio Code]

Now, in the ISE, I do get some intellisense and parameter hints. The parameters look better live, but the screen capture ghosted them out a bit.

[Screenshot: intellisense in the PowerShell ISE]

In VSCode, this actually looks better. First, I enter a cmdlet name and see some basics.

[Screenshot: cmdlet name completion in Visual Studio Code]

After I’ve entered it, I see more details as I work with it.

[Screenshot: parameter details in Visual Studio Code]

Running code works OK. I can hit F5 and the code runs. It’s not ideal, as the function keys make my hands move off the home row, but that isn’t different from the ISE. I definitely wish there were better shortcuts, and I know there are extensions to help with this. I need to play with some.

The debugging, however, is much easier, more in line with VS and other debuggers. Set a breakpoint, hit F5, and I have real debugger controls if I want them (as well as F10 and F11).

[Screenshot: the debugging controls in Visual Studio Code]

I can see variables in the debug window, or in the main window if I hover. This feels a bit cleaner than the ISE.

[Screenshot: inspecting variables in the VSCode debug pane]

There still are some issues. F10 seems to stop responding sometimes, and breakpoints don’t always remove when I try to clear them. However, those are pretty minor items in the scheme of things.

Overall, I’ve changed my view of VS Code a bit. It does seem to have improved since the last time I tried PoSh in it. Maybe I’ll give it another chance with other languages as well.

Posted in Blog | Tagged , | Leave a comment

The Full Court CosmosDB Press

This year at the PASS Summit, Dr. Rimma Nehme (b|t) will give another keynote address. She has spoken at a number of previous Summits and is a very popular speaker. This time her topic is CosmosDB, which is where she’s been working at Microsoft for some time. I was lucky enough to see Dr. Nehme talk some CosmosDB at Build 2017, and I’m sure she has more to say now.

Actually, I’ve been a bit surprised by all of the writing about CosmosDB that I see taking place. Perhaps it’s that the CosmosDB Twitter account is very active, but it seems that so many .NET developers and other non-SQL Server professionals are talking about how they are building projects and platforms on CosmosDB. I follow plenty of SQL people, but it seems there is a tremendous amount of activity on CosmosDB (and R/Data Science), more than I see with plain old SQL Server (POSQL?).

When I first saw DocumentDB, the forerunner of CosmosDB, I thought it was a good platform for some workloads. Certainly the heavy write-a-document, read-that-same-document pattern of the Xbox profiles and some of their games seems like a good use of the technology. When they added MongoDB compatibility, I was slightly more interested, but still not a lot. I really haven’t seen many workloads that exceed what SQL Server can do as a relational platform, at least not if you write efficient code and buy enough hardware. The change to CosmosDB, however, was really interesting. Allowing graph APIs, key-values, and documents in one service is something I’m sure is attractive to many developers, especially with the heavy use of JSON, which seems to be a format many developers prefer over datasets.

I know Microsoft would like more people to use CosmosDB, as it’s a completely PaaS, pay-as-you-go service, and that could easily increase their revenue if they can get many people using the product. They have invested quite a bit in the platform, with latency guarantees, multiple consistency models, and plenty of scalability. It’s certainly on my list of topics to dig into: how CosmosDB works and where it might be applicable to solving data problems.

I think there are applications where a CosmosDB datastore will work well, but I wonder if this is more of a front-end, data-gathering service that will require additional databases behind it, where data can be combined and aggregated for other reporting. I don’t know that querying your graph tables alongside key-values and documents will be as easy as it might be in a relational system, but hopefully we’ll see some people (perhaps me) experimenting and reporting back on the complexities and advantages of such a datastore.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

I want a new gadget with less features

Years ago I bought a Pebble smartwatch. I didn’t back the Kickstarter, but a few friends did and I liked the product. I wore mine for almost two years before it started to die. I can be rough on gadgets, so it’s possible I banged it and caused my own damage. I tried a Fitbit (original) briefly, but really liked having time on my wrist.

I got a Pebble 2, which I really like, but I’m losing it. I got my son a Pebble 2 as well, but recently he fell and cracked the screen on his. He’s in college, but since I’ve had the use of his motorcycle, I decided to give him mine.

That means I need a new watch. Since the Pebble watches aren’t being updated and maintained anymore (no thanks to Fitbit), I wasn’t sure I wanted to get another one. I could, as they are still for sale at various places. I decided to look at other devices, perhaps some that might have better data options.

I put a few queries out to friends, but I’m not quite sure that I have a better choice. As I think about it, there are only a few things that I really use.

  • Clock/time – Especially important as I end up speaking in different time zones
  • Vibrating alarm – This is my alarm clock and allows me to wake up easily, without disturbing my wife.
  • Waterproof – I do not want to take this off for a shower or when I swim. So, most Fitbits are out.
  • Steps – I like step tracking. It’s a nice metric that I like to watch periodically.
  • Heart rate – I don’t check this too often, but it’s a good metric that I do want to watch as I get older.

Some devices have these features (and many more), but many are also in the $200 range, which seems like a lot to me. Especially in this age of powerful, commodity hardware. The Pebble 2 did these things well, in a simple way.

I’m looking at reviews and comparing items based on recommendations, but more and more, it seems like another Pebble 2 might be the best choice. Maybe I should just buy 2, in anticipation of the future.

Posted in Blog | Tagged , | 3 Comments

Voting for Change

Microsoft does listen to us. They’ve made a number of changes in SQL Server 2016/2017 in response to community votes and requests. The main way to make these requests is through Connect, though keep in mind that lots of items get filed, and I don’t know how much consideration is given to suggestions for improvements and enhancements. I’m sure a significant amount of time is spent tracking down bugs and attempting to reproduce them.

The best way to work for change is to advocate your request and get others to vote on items. I’m sure there are hundreds, maybe thousands of good requests out there, and most of us don’t have time to review them. We’ll often vote only when we hear about an issue from friends or on Twitter. Even then, many of us place different priorities on issues, and might not vote for suggestions that we don’t find useful. However, attention does get votes, so let people know what you would like to see.

That being said, there are a few that I ran across and thought would be worth mentioning. One that I think is an easy choice, and should have been included a long time ago is the ability to increase the number of SQL Agent log files. Just as we can increase the engine log file count, we ought to be able to do the same thing for Agent logs. Auditing and proper security should ensure we have plenty of log files for busy systems. Plus, this should be easy to change. In line with that, why not more system health session files?
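For contrast, here’s a sketch of how the engine’s error log count is commonly raised today; the Agent has no equivalent, which is the point of the request. This uses the undocumented xp_instance_regwrite procedure (the same change the SSMS “Configure SQL Server Error Logs” dialog makes), so treat it as an illustration and prefer the dialog on real systems.

```powershell
# Sketch: raise the engine error log retention to 12 files by writing the
# NumErrorLogs registry value. xp_instance_regwrite is undocumented; this is
# for illustration - the SSMS dialog is the supported route.
$query = @"
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    12;
"@

Invoke-Sqlcmd -ServerInstance '.\SQL2016' -Query $query
```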

I like this one for hidden columns, which might be really handy for legacy code. What I’d really like is for the string or binary data truncated error to be enhanced. I can’t, for the life of me, understand why this hasn’t gotten some sort of fix. It’s been lingering for far too long, wasting countless development hours. Maybe they could fix it with new virtual tables.

There are plenty of other issues in Connect, some of which might be very useful, quite a few of which are silly. I still think shining light on some of these and getting more votes might change the future of the product. At least, that’s what I’m optimistic about.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.2MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

The Code Coverage Report

I had someone tell me recently that they needed to find a way to get code coverage in T-SQL. Not for themselves, not to improve quality, but because their manager wanted a report.

OK, here it is.

First, build a nice SSRS report that displays your company logo, a header, the current date, and a pie chart. Yes, I know pie charts are bad, but they’re good for managers that want something like code coverage.

Now, for the data source of the report, here’s your query:

SELECT 90 + ((RAND() * 8)-2)

Now, you won’t always get over 90%, which is what some people want; the expression returns values between 88 and just under 96, so you’ll get over 90 eventually.

Posted in Blog | Tagged , , | 1 Comment