Global v Session Trace Flags

I wrote a short article on enabling and disabling trace flags. You can read it, but I didn’t really discuss the implications of session v global trace flags, which is something I’d like to do here.

In the article, I set trace flag 3226 for my session. This meant that a second backup wasn't recorded in the error log. Note that the image below doesn't have a backup message after (above, since the newest entries appear first) the trace flag change entry. You'll have to trust me that I ran the backup, enabled the trace flag, and then re-ran the backup to get this image.

[Screenshot: Log File Viewer on .\sql2014, no backup entry after the trace flag change]

However, if I have a backup job, as I do here, does the trace flag affect this? This is, after all, run by SQL Agent, which would be a different session.

[Screenshot: Job Step Properties for the backup job]

It turns out that the session trace flag doesn’t affect this. I ran the backup job and there was a message in the error log.

[Screenshot: Log File Viewer on .\sql2014, backup message logged by the Agent job]

To suppress this, I'd have to use DBCC TRACEON (3226, -1) to set the flag globally, or put -T3226 in the startup parameters, to ensure none of these messages appear.
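
If you want to see the scope difference for yourself, here's a minimal sketch using the documented DBCC commands (run it on a test instance):

-- Session scope: only this connection stops logging successful backups
DBCC TRACEON (3226);

-- Check the flag; the Global column shows whether it applies to all sessions
DBCC TRACESTATUS (3226);

-- Global scope: affects every session, including SQL Agent jobs
DBCC TRACEON (3226, -1);

-- Turn it back off globally when done
DBCC TRACEOFF (3226, -1);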

Informing the Internal IT Team

Somehow across the course of my career, I've morphed from a strictly technical guy to a publisher and teacher who also gets to do some technical work. I never expected this, though perhaps I should have. Despite my disdain for writing in formal schooling, I learned early on to always document and log what I did. In fact, a few times my notes and tracking ensured that we weren't chastised too harshly for our actions. Perhaps it even kept some people from being fired. I'm not sure, as I think it's hard to get fired, but having some reasoning and documentation can provide insurance against the boss getting too upset.

In my career, I've had to send out status items at different times, for various purposes. As with many of you, I've often had to document my time and efforts for my manager. I've been part of teams that adopted the stand-up practice of sharing your workload with others. I've had to summarize the work of my team for a director or VP above me. Some of those efforts felt valuable, but often it seemed I was going through the motions, wasting time on something that was more of a cover sheet on a TPS report than anything else.

There was a post on publishing an engineering weekly newsletter that reminded me of the various status reports I've built. While I haven't had a weekly engineering letter, I have been in companies where a weekly or monthly newsletter was sent out. In fact, at one of my first jobs, we got a physical copy of the monthly company newsletter delivered to our mail cubbies (anyone remember those?). While people would grumble about the contents or disbelieve them, I also noticed that many of them actually read the newsletter. I think it's natural for people to want to be informed about the company, even if they are skeptical about the data.

The idea of publishing a weekly newsletter for your staff makes sense. It's a good way to keep people on the same page, especially as your organization grows beyond a single team. I wouldn't expect everyone to read every part of each issue, but that's OK. They'll get something out of it. It's also a good way for someone to fill time during a short break and see who's out of the office, who's having a birthday, or who's changed positions. I think it's a good reminder for items like on-call rotations or project releases, too. Even if I don't have time to read it right away, hunting down information scattered across systems can be a hassle, and a quick search of email would let me check who to call this week after hours.

Including technical content is always good, and allowing someone to peruse articles or new concepts on their own time, maybe even encouraging more learning subtly, could improve the quality and morale of your staff. Of course, if you are an open, exciting company that employees enjoy, this works well. If management is oppressive and overbearing, then publishing a newsletter isn't going to fix your culture. You've got other problems to fix first.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.1MB) podcast or subscribe to the feed at iTunes and Libsyn.

Create a BACPAC – #SQLNewBlogger

Another post from me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

One of the things that you might encounter at some point is the need to create a BACPAC. This is essentially a DACPAC with data included. There are times you might find a BACPAC more convenient than a full backup, especially if you work with Azure.

This post shows how you can create a BACPAC file (with the extension .bacpac) from SSMS.

I have a small database, the PartsUnlimited database, from the Microsoft PartsUnlimited demo application. I want to create a BACPAC from this; however, when I right-click the database and select Tasks, there is no "Create BACPAC" option.

[Screenshot: the SSMS Tasks menu, with no Create BACPAC option]

That's fine, because the "Data-tier Application" is the DAC part, and an export includes the data, which gives us the BAC part of the process. Let's choose "Export Data-tier Application". Once we do, we get the expected introductory wizard screen.

[Screenshot: Export Data-tier Application wizard, introduction screen]

I click Next and am immediately prompted for a location in which to save the file. The file will include all your data, so choose a location that has sufficient disk space. You also have the option to store this in Azure if needed.

[Screenshot: Export Settings, choosing a location for the file]

Note the filename above is already filled out by default, but you can change this in the edit box. If you click the “Advanced” tab, you’ll get the chance to select only certain tables if that is required. You can see this dialog below.

[Screenshot: Export Data-tier Application wizard, Advanced tab for selecting tables]

Once you've chosen a location, you click Next and get a summary page.

[Screenshot: Export Data-tier Application wizard, summary page]

Click Finish, and the process starts. This runs very quickly, extracting the schema and then all the data, noting the results for each object.

[Screenshot: Export Data-tier Application wizard, operation progress and results]

That’s it. If I look in the location, I’ll see my BACPAC file.

[Screenshot: the DAC Packages folder containing the .bacpac file]

I can copy this to another machine and import it to recreate a database. We’ll do that in another post.
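
If you'd rather script this than click through the wizard, SqlPackage.exe can perform the same export from the command line. A minimal sketch, with an assumed install path and made-up server, database, and file names (adjust all of these for your environment):

"C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" /Action:Export /SourceServerName:.\SQL2016 /SourceDatabaseName:PartsUnlimited /TargetFile:C:\DACPACs\PartsUnlimited.bacpac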

SQLNewBlogger

A short, quick post. I've done this before, but I had to do it again for a quick process, so I took five extra minutes to grab screenshots and spent five more later writing this up.

The Livestream Product Launch

SQL in the City Streamed in 2016 was a success, with lots of feedback that people enjoyed the sessions and format. With the release of SQL Clone, we’ve decided to do it again.

The SQL Clone Product Launch Live Stream is coming in a couple of weeks, on March 29, live from Cambridge, UK. I'll be traveling there, along with Grant Fritchey, and we'll both be helping broadcast all day, repeating sessions across many time zones.

Join us and see if SQL Clone is right for you. Or download an eval and give it a try.

DevOps Isn’t Perfect

I saw a post recently from Grant Fritchey, Help Me, Help You, Deliver DevOps. In it, Grant shows that DevOps isn't making developers do everything, it isn't just giving developers more responsibility, and it isn't developers taking over some of the operations staff's responsibilities. It's about getting development and operations to work together. Hence, DevOps, not DEVops, or Dev    ops.

In the post, Grant also links to some fundamental failures from companies that have supposedly adopted DevOps processes, companies like GitLab and Amazon. There are plenty more examples, which certainly make some people question whether DevOps is better than other methods of building and deploying software. How are the VSTS issues different from the Knight Capital debacle?

In some sense, they aren't. In many cases, we've had a software issue that affected customers. People were upset, services weren't delivered, and it's entirely possible that revenue was lost. However, one of the things that I think differentiates DevOps is that developers and operational staff can troubleshoot and determine a root cause much more quickly. Since the entire commit, build, test, release pipeline is instrumented and documented, your staff can quickly determine what new items might have caused an issue, or just as quickly rule out recent changes if the cause lies elsewhere.

However, once you've placed your system in production, DevOps doesn't ensure that your HA (High Availability) or DR (Disaster Recovery) preparations are adequate. Those are still the responsibility of your Ops staff, but if you're practicing DevOps, storing configuration as code, and provisioning your systems the same way in every environment, hopefully you've taken advantage of that to test some HA or DR features in an environment outside of production.

DevOps isn't prescriptive advice, though. It isn't a particular process or a tool that you use or install. DevOps is about learning and improving over time, which means each company will be at a different stage of DevOps maturity. Some have complete deployment processes, but limited instrumentation and monitoring in production. Some have tons of data and deploy code perfectly, but no HA. Some have great HA, but poor deployment processes.

DevOps is about constant improvement and advancement to make your software development (Dev) and deployment (Dev and Ops) and monitoring/management (Ops) better over time. This might not prevent human error, or even code bugs, but DevOps should mean you make fewer mistakes, less often, and learn from them to avoid repeating those issues.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

T-SQL Tuesday #88 – WTF

It's T-SQL Tuesday time again, and this is a good one. The host this month is Kennie Pontoppidan, who I had the pleasure of running with one morning in Copenhagen. His theme is The daily (database-related) WTF.

You can read about T-SQL Tuesday at tsqltuesday.com and see all the archived invitations. Join in the fun by watching the site, or the #tsql2sday hashtag on Twitter. If you want to catch up on old invites, feel free.

WTF

I'm not sure if I have a great story, but I've certainly had a few "WTF" items in my career. Some of them were caused by me, some by others, and they were rarely a good thing.

The one item that I'll relate here goes back quite a few years, but it affected me for many years afterwards. I do quite a bit of work with DevOps and software development these days, trying to help people build better software.

Part of that is ensuring your code, including your database code, is in a version control system (VCS). I’ve worked in a variety of environments, using different source code control methods, and I have to relate a story that taught me how important it can be to manage code and track the versions in production.

A long time ago I went to work for a small startup. The company had been in business for 5 or 6 years and had a number of customers that depended on our service. However, our developers and admins were a mess. I was brought in to try to straighten out the database and infrastructure side of our codebase.

One of the first things I found was that all our stored procedures in the production server were encrypted. I wasn’t sure why, since we hosted our machines, but that wasn’t a big deal.

Until it was.

One day we had an issue on one of our SQL Server 2000 servers (we had two, supposedly identical). While troubleshooting, I put some sample data into both systems for a fake customer and got different results. Hmmm, not what I wanted to see.

I checked the VCS (SourceSafe at the time) and checked out the code. I then loaded my test data and … got a third, different result. Now I was concerned, as this was a production bug that was delaying work for a customer.

This was early in my tenure at the company, so I asked the other DBA for some information. He had been passed over in favor of me by management and wasn’t a great deal of help. He told me that he likely had the source code for server 1 on his machine.

You might start to guess why this is a WTF post. Rather than get upset, I wanted to see the code, which he produced. His version was on one of the servers, but not the other. Where was that code? He thought it was on the lead developer's machine, and that developer, conveniently, was on vacation. We checked, and it wasn't in his checked-out folder structure.

What about our former developers, who had moved out of state? We still had their machines, which were supposedly going to be shipped to them so they could continue working for us. The checkout folder wasn't right there either, but we found two other folder structures, copies of the source code tree, with different code. At this point, I just needed to fix the issue, and I managed to find a copy of the stored procedure that matched the behavior I was seeing, built a correction, and deployed it to both servers. Unencrypted.

Detective Time

At this point, I paused some of the work the DBAs were assigned. We spent a day or so poring over all the developers' machines, finding 6 or 7 different copies of the source code (plus our SourceSafe install). Multiple files were different, and we essentially had no idea what was running in production.

Apparently our developers had decided that when they checked in code, they would leave it on their machines. If they wanted a branch, in essence, they’d copy this folder over, make changes, and sometimes copy things back, but mostly just assume they knew what was good, and leave the updates on disk, uncommitted to the VCS.

I couldn't come up with a rational reason for this. Back then, we didn't have any sort of integration like SQL Source Control, and we checked code in and out of the VCS manually. I had previously set all my machines to remove the code from my local disk, ensuring I'd go to the VCS to get code when I needed to change something. I suspected this was one way of creating "job security" for a few developers and DBAs. Hint: it didn't work with me.

Solving the Issues

This necessitated a complete reboot of our system. Together with a new lead developer (you can guess where the old one went), we paused development for 3-4 days. I found a routine to decrypt SQL Server 2000 stored procedures, and we all spent time decrypting the stored procedures from both instances and committing them back to a new tree in SourceSafe. If we had any discrepancies, then we had at least two developers examine the code, merge the differences, and commit the result.

After a week of lost work, we had a clean source code tree of what was running in production, and both database servers synchronized. From there, we could start new development. Learning from the issues, I also would only deploy changes from the VCS (manually back then), which ensured the developers had an incentive to commit their changes and not just alter the database objects on the development server.

Of course they still did, and we had numerous deployment issues. If only we’d had a few better tools back then for CI and CD in the database world.

GUI or CLI

There's a debate over on Scott Hanselman's blog about the GUI v CLI in the comments of his Azure CLI post. It's a blend of people complaining about one or the other, mixed with a few (I think) rational people who recognize that the GUI and CLI are both valuable. In fact, one person mentions SQL Server and how the SSMS GUI can generate scripts for later use.

I like the command line, but sometimes it's hard. I don't remember the syntax for a variety of commands, especially PowerShell, where I use tab completion constantly. I don't even remember the options for many T-SQL functions, because I don't use them very often. I may find myself depending on Google or BOL for a quick refresher, but often I'll just use SSMS if I can to pick some items in the GUI, click Script, and then examine the code before executing it.

I really don't mind someone wanting to use the GUI the vast majority of the time they're working, but everyone should be comfortable with a CLI. If there is a task that needs repeating multiple times, then I really want everyone using the CLI. It doesn't matter how you've written the code; executing something from a CLI is much more reliable than having to click around a GUI, trying to be consistent and quick, over and over. The CLI just works better.

I'd like to think that most computer professionals these days are able to use a CLI, even if they aren't too comfortable with it. The growth of PoSh as a wrapper around so many features and functions in the MS stack certainly contributes to this, as does the fact that we see so much code as code, not as images from a GUI. GitHub and collaboration, as well as lots of code samples in articles, should mean that many people are comfortable working with code and executing it from some CLI.

I’m sure there are plenty of exceptions. Someone that works mostly with SQL Server might be happy running code in a query window and use SSMS for everything else. They might not even be aware that they can build things like SQL Agent Jobs from a CLI. I get it, the GUI is quick and easy. I use it for jobs, for Extended Events, and various other tasks where the code is complex and cumbersome. However, if something is easier from a command line, I like using it. Perhaps that’s why my console of choice, ConEmu, is always just a CTRL+~ away.
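
On that note, a SQL Agent job is just a handful of msdb procedure calls under the covers. Here's a rough sketch of building a simple job from code; the job name, step, and backup command are invented for the example:

USE msdb;
GO
-- Create the job shell
EXEC dbo.sp_add_job @job_name = N'Nightly PartsUnlimited Backup';

-- Add a T-SQL step that runs the backup
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly PartsUnlimited Backup',
    @step_name = N'Run backup',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE PartsUnlimited TO DISK = N''D:\Backups\PartsUnlimited.bak'';';

-- Target the local server so the Agent will actually run the job
EXEC dbo.sp_add_jobserver @job_name = N'Nightly PartsUnlimited Backup', @server_name = N'(local)';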

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

Getting the tSQLt Run Adapter working in Visual Studio 2017

Last year I heard about the tSQLt Test Adapter for Visual Studio from Ed Elliott. I'd been wanting to try it, but various items got in the way. Finally I had the chance to play with it, and it worked well in Visual Studio 2015, but I needed it in VS 2017. Fortunately Ed had a tSQLt Run Adapter beta for Visual Studio 2017, though I ran into a few issues. This is a debugging post.

I downloaded the file and ran setup. Since this can cause issues with VS 2015, I unchecked that box. Unfortunately, I think I messed up my VS 2015 project. No matter, we’ll forge on.

I had a ReadyRoll project where I was doing some work. Following some work from a Redgate developer, I set up a new test project according to the tutorial. I got through it, but saw no tests.

Hmmm.

The .runsettings file is set in the root of my solution, as shown here:

The contents are:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="TestDatabaseConnectionString" value="Data Source=.\SQL2016;Initial Catalog=PartsUnlimitedDB;Integrated Security=True;" />
    <Parameter name="IncludePath" value="Tests" />
  </TestRunParameters>
</RunSettings>

My local instance is .\SQL2016, a named instance, and I have a PartsUnlimitedDB database on this instance.

Here the file is selected:

When I run all tests, I get this:

I heard from Ed that I needed to have the name "tests" in the filename, so I changed that. Here's the test and the file name:

Now I see my tests in the test explorer. Success!

Just to check a few things, let’s try another file. Here I’ll use a shorter name, though still descriptive.

And again, success.

That felt strange, but some back and forth with Ed showed me that the IncludePath filter in the .runsettings file needs to be set to some value. In the default file I used, it's set to "Tests". If I change it to "test" and include a new test, things still work:

Using tSQLt to Find Min/Max Times

I love tSQLt. It’s a good way to write tests that can determine if your code is actually working. Since I’m a fan of unit testing, I think using tests to verify your logic is great. What’s excellent with tSQLt is that I can verify a number of cases at once.

I ran across this post asking for help with a query. Given the sample data and results, I wrote this proc and test. In the test, my "Act" step calls a proc I wrote that executes the original poster's query.

CREATE OR ALTER PROCEDURE RunTimeTests
AS
BEGIN
    SELECT
        Taskid,
        MinTime = MIN(StartTime),
        MaxTime = MAX(EndTime),
        Minutes = DATEDIFF(MINUTE, MIN(StartTime), MAX(EndTime))
    FROM TimeTests
    GROUP BY Taskid;
END;
GO
EXEC tsqlt.NewTestClass @ClassName = N'tTimeTests';
GO
CREATE OR ALTER PROCEDURE [tTimeTests].[test calculation min max time from timetests]
AS
BEGIN
    -- assemble
    EXEC tsqlt.FakeTable @TableName = N'TimeTests', @SchemaName = N'dbo';

    INSERT INTO TimeTests
    VALUES
        (1, '2017-02-23 09:48:47.413', NULL),
        (1, '2017-02-23 09:50:47.413', '2017-02-23 10:59:47.413'),
        (1, '2017-02-23 09:49:47.413', NULL),
        (2, '2017-02-23 10:40:47.413', '2017-02-23 11:55:47.413'),
        (2, '2017-02-23 10:39:47.413', NULL),
        (2, '2017-02-23 10:11:47.413', '2017-02-23 11:30:47.413');

    CREATE TABLE tTimeTests.Expected
        (Taskid INT, MinTime DATETIME2(3), MaxTime DATETIME2(3), Minutes INT);

    INSERT tTimeTests.Expected
    VALUES
        (1, '2017-02-23 09:48:47.413', '2017-02-23 10:59:47.413', 71),
        (2, '2017-02-23 10:39:47.413', '2017-02-23 11:55:47.413', 76);

    SELECT *
      INTO tTimeTests.Actual
      FROM tTimeTests.Expected
     WHERE 1 = 0;

    -- act
    INSERT tTimeTests.Actual EXEC RunTimeTests;

    -- assert
    EXEC tsqlt.AssertEqualsTable
        @Expected = N'tTimeTests.Expected',
        @Actual = N'tTimeTests.Actual',
        @Message = N'Incorrect times';
END;

When I run this, the test fails, quickly confirming that the results the original poster expected don't match what the query returns.

[Screenshot: SQL Test in SSMS, the test fails]

If I change my expected results:

    INSERT tTimeTests.Expected
    VALUES
        (1, '2017-02-23 09:48:47.413', '2017-02-23 10:59:47.413', 71),
        (2, '2017-02-23 10:11:47.413', '2017-02-23 11:55:47.413', 104);

and re-run the test, it succeeds.

[Screenshot: SQL Test in SSMS, the test passes]

Now, does this mean the developer wouldn’t make this mistake? After all, if you think you should be getting those results, you will struggle with the query.

It doesn't help there. However, it does help if you modify this code later and start to see strange results. It also means that I can add more rows to the data, covering more cases, and determine whether the procedure still works. If I'm trying to cover a dozen cases, it's much easier to re-run a tSQLt test than to look through results manually.
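
For instance, here's a hypothetical extra case (my own invention, not from the original thread): a task that has started but never finished, which should return NULL for both the max time and the duration.

-- In the assemble section: task 3 never ends
INSERT INTO TimeTests
VALUES (3, '2017-02-23 11:00:47.413', NULL);

-- The matching expected row: MAX(EndTime) and the DATEDIFF are both NULL
INSERT tTimeTests.Expected
VALUES (3, '2017-02-23 11:00:47.413', NULL, NULL);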

Give tSQLt a try. It's free, and if you have the SQL Toolbelt, you can get a GUI with SQL Test for executing your tests.
