Scripting Tables with dbatools

I really like the dbatools project. This is a set of community-built PowerShell cmdlets that are incredibly useful for migrations between SQL Servers, as well as for various administrative actions. I have a short series on these items.

I’ve been experimenting with scripting some tables, and I wanted to check whether dbatools made that easier. I was amazed by the number of new cmdlets in the project, and after hunting around, I found a couple of cmdlets: Get-DbaTable and Export-DbaScript. I decided to see how these work.

My first experiment was to run Get-DbaTable. This takes an instance and database as parameters and returns a set of table objects. I get space and size information and some metadata. As a short look, here’s one table.

[Screenshot: PowerShell output from Get-DbaTable showing a single table]
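
To give an idea of the shape of the output, something along these lines shows that metadata (a sketch; the property names come from the underlying SMO Table objects that Get-DbaTable returns, so adjust to taste):

# A sketch: pull a few size-related properties from the SMO Table objects
Get-DbaTable -SqlInstance Plato\SQL2016 -Database NBA |
    Select-Object Name, RowCount, DataSpaceUsed, IndexSpaceUsed |
    Format-Table -AutoSize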

I decided to use this as input to Export-DbaScript to see what happened. I used this command:

Get-DbaTable -SqlInstance Plato\SQL2016 -Database NBA | Export-DbaScript

This gave me some output. It wasn’t quite what I wanted, but it worked.

[Screenshot: Export-DbaScript output showing the path to the exported file]

Note that all of the tables were exported to the same file. If I open that file, here’s a snippet of the code: I get a header, and then each table’s code put together as a single batch.

[Screenshot: Plato$SQL2016-Table-Export-03222018142703.sql open in SSMS, showing the header and table scripts]

Not quite what I wanted.

I can control the output file using the -Path parameter. This still gets me one file, but I can make this better. I’ll make a folder and change to that folder in my PoSh window. Now I can run this, using ForEach-Object to iterate over the tables, outputting each to a file:

Get-DbaTable -SqlInstance Plato\SQL2016 -Database NBA | ForEach-Object { Export-DbaScript -InputObject $_ -Path ($_.Name + ".sql") }

When I do that, I get this:

[Screenshot: a folder listing with one .sql file per table]

Each table’s code is now in a separate file.

That’s nice. With filtering, I could have this generate scripts for specific objects or types of objects, and update them from PoSh, likely in a job or task, relatively easily.
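
For example, something like this would script out only the tables in one schema (a sketch; filter on whatever property suits your needs):

# A sketch: filter on the SMO Schema property before scripting each table
Get-DbaTable -SqlInstance Plato\SQL2016 -Database NBA |
    Where-Object Schema -eq 'dbo' |
    ForEach-Object { Export-DbaScript -InputObject $_ -Path ($_.Name + ".sql") }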

Certainly the Generate Scripts process in SSMS is easier than writing this script, but it’s not programmable. dbatools wraps the SMO scripting objects and makes the process much easier.

If you haven’t tried dbatools, give it a go. You’ll appreciate the ease of scripting and might start to enjoy working with SQL Server in PoSh.


Ad Hoc Logs

A long time ago I started out as a general systems administrator, working in a team of six people managing a large, 1,000+ node network with a number of servers. In my first exposure to SQL Server, we had a very unstable application that forced us to provide 24×7 support. With a couple of contractors, we had to ensure someone was on-site all the time, often working with Microsoft and our corporate developers to stabilize the applications. We were constantly trying new techniques to fix our application, and with staff stretched thin, we struggled to understand what might have happened in the previous 12 hours when we reported for work. At the time, I suggested leaving a text document on each server’s desktop, updated with a note for each change.

That worked well, and I brought the technique with me to future positions. In another job, we constantly connected to systems remotely, and having a standard file on the desktop was helpful. As we became more security conscious and stopped using shared logins, we moved our logging to Exchange public folders. Every action taken by an admin needed to be cut and pasted into a new post. That wasn’t a perfect system, but we built habits over time, and we had an audit trail that helped us understand the changes we’d made and assisted in troubleshooting.

Today I’m curious, and I want to ask about data corrections: those quick changes, those fixes that get production working again. Do you log everything? Is there some system in place to ensure you know what’s happening?

I’ve been wondering about this and thinking hard as the date for GDPR enforcement approaches. One of the things I glean from the law’s text is that any change to correct data, any quick fix, needs an audit trail. We need to prove that we know who made the change, when, and why it occurred. This is especially important if a data subject requested some correction: you’ll need to prove you actually performed the action.

I’ve never worked anywhere that some admin (including myself) completely avoided connecting to a production machine and making some change. Sometimes we’ve had great auditing, often not, but ad hoc fixes and changes, especially in the heat of an issue, are a fact of life. I’ve learned to deal with it and try to build lightweight habits to help me capture those changes.
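
One of those habits can be as simple as a log table. Here’s a minimal sketch of what I mean; the log table, the dbo.Customers fix, and all the names are invented for the example:

-- A minimal ad hoc change log; adjust the names and columns to taste
CREATE TABLE dbo.AdHocChangeLog
(
    LogId     int IDENTITY(1, 1) PRIMARY KEY,
    ChangedAt datetime2 NOT NULL DEFAULT SYSDATETIME(),
    ChangedBy sysname NOT NULL DEFAULT SUSER_SNAME(),
    Reason    nvarchar(500) NOT NULL
);

-- Make the fix and the log entry in the same transaction
BEGIN TRANSACTION;
    UPDATE dbo.Customers SET Email = N'corrected@example.com' WHERE CustomerId = 42;
    INSERT dbo.AdHocChangeLog (Reason)
        VALUES (N'Ticket 1234: customer requested an email correction');
COMMIT TRANSACTION;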

Let me know today. How bulletproof is your auditing?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.0 MB) podcast or subscribe to the feed at iTunes and Libsyn.


I learned about the order of logical operations #SQLNewBlogger

Another post that is simple for me and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I took logic as part of a CS curriculum many years ago, and I’ve worked with AND and OR statements for years. I’ve sometimes confused myself, but I usually ensure I have parentheses included to clarify the code, not just for me, but for anyone who might glance at the code later.

As a side note, I also try to format code so a quick glance can reveal what happens.

However, I learned something new this week. I saw a question about the order of logical operations in this form: a or b and c.

I had somewhat assumed that, like math, we’d use a left-to-right evaluation. However, that’s not correct. Look at this snippet:

[Screenshot: SSMS query window demonstrating the evaluation order of OR and AND]
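
If you want to recreate the demo, a query along these lines shows the same behavior (a reconstruction; the sample values are my guess at what the screenshot used):

-- Without parentheses: does the OR or the AND bind first?
SELECT n
FROM (VALUES (1), (2), (3)) AS t(n)
WHERE n = 1 OR n = 2 AND n > 3;
-- Returns a single row, n = 1, because the AND is evaluated first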

If we went left to right, we’d have two rows from the OR (n = 1, n = 2) and then an AND that produces no rows. So, no results?

That’s not correct. According to BOL for OR, the AND operations occur first. So n = 2 AND n > 3 is evaluated, returning 0 rows. Then the OR with n = 1 is evaluated, returning 1 row.


That surprises me a bit, though I’ve never thought much about it, because I’d write

WHERE (n = 1 OR n = 2) AND n > 3

or

WHERE n = 1 OR (n = 2 AND n > 3)

and be sure that what I wanted to occur would occur.

A quick lesson: while it’s good to know what the order of evaluation is for your platform, don’t count on it. If there is a chance for confusion or unintended consequences, use parentheses. It’s simpler and easier, and, I might argue, more elegant.


More SQL Server Updates

This was a busy week for updates. I finished adding new builds to the build lists for 2017, 2016, and 2014, the first time I’ve done all three platforms on the same day. To be fair, two updates were on the 19th and one on the 20th, but that’s pretty close to the same day. In the past, I’ve had updates for different versions during different months.

That will not be the case in the future. SQL Server 2017 is on the CU model, where we get monthly updates to the product: no more service packs for this version. If you hadn’t heard that, be ready for the new servicing model that consists of just the monthly updates. I wasn’t sure if I liked this at first, but I do appreciate that we don’t get confused trying to determine which CU is needed, as there were different patches for RTM, SP1, SP2, etc. Now I’ve gotten used to expecting an update each month and applying it after a few days. So far, Microsoft has mostly been sticking to monthly releases, though the December release got delayed into January. With holidays, that’s expected.

This week also saw the update of SSMS to v17.6. You can use this version freely, downloading the full install or just the update package if you are running 17.x. We are seeing updates to the tool on a fairly regular basis, every couple of months. I’m hoping we continue to see fixes and improvements alongside new functionality, as SSMS was neglected for too long. I don’t know how stable this version is, but 17.4 was on my machines for a long time without any ill effects, and this one seems fine for now.

Updates have always been a part of software, but often they were fairly rare, with interruptions occurring just a few times a year. That’s changing. Redgate releases software every week, with almost every product updating once or twice a month. Although I use relatively few pieces of software, all of a sudden it seems like I have a regular set of tools to update across my SQL Server development platform: SQL Server, SSMS, VS, the Redgate tools, and more.

That’s good, as I get fixes and increased functionality, but across a set of tools, I might find myself updating something every day. While the software often updates itself if I click OK, I have to make a decision and the process is disruptive and time consuming, especially when I run into required updates, which happens more than I’d like. I like the idea of DevOps, but one of the unintended consequences is that if everyone does it, there are a lot of changes and updates for users to deal with.

I don’t have a solution, but I do find myself delaying updates more and more, often choosing to take only every third or fourth update. Maybe if everyone had an easy way to apply patches with PoSh across systems, or smoother built-in mechanisms, I’d feel differently, but for now I probably click "skip" more often than "apply".

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Interviews: Covering index

Performance seems to be a part of every DBA’s job. It should be a part of every developer’s job too, but I understand the focus in building software is different, and many developers aren’t sure how much a feature or option they work on will be used. They don’t think too much about performance because, at small scales, perhaps performance doesn’t matter a lot these days with multiple cores and many GB of RAM. I’d disagree with that, but I understand the perspective.

What’s interesting to me is that in many interviews I’ve had over the years, I’ve not often been quizzed on specific scenarios that might help improve performance for queries. It seems many interviewers like to ask trivia questions, such as the difference between a clustered and nonclustered index, or which configuration knobs to change. Code questions might be how to find duplicates in a table or how to deal with NULLs in aggregates.

I hope that’s changing and more interviewers are giving candidates scenarios, looking for ways to ensure that candidates think about performance. I hope someone asks how a candidate reads an execution plan or structures code to remove RBAR concerns. I wondered about this since I chatted with someone recently who had been asked about covering indexes in an interview and was struggling to understand how a covering index might differ from a clustered, nonclustered, filtered, or other type of index.

This might seem like a trick question, and perhaps it is for some interviewers. After all, if you don’t know, you might correctly guess what this means, but will you know this isn’t a special type of index? Perhaps you’ll try to bluff your way through the answer and hope you get close.

Answering interview questions should be a discussion, as your answer can reveal as much about how you work as it can about your knowledge. Do you probe to understand the boundaries of the question or confirm that you know what’s being asked? Do you admit when you don’t know something or perhaps reason out loud? Do you give details that make sense, or do you launch into a lecture?

In this case, you should understand that this index covers a query. This isn’t a type of index, but rather a characteristic of an index against a particular query (or set of queries). To learn more, you might want to read a bit about adding included columns or how to evaluate your choice. If you don’t know much about indexing, get started with an introduction or go through our Stairway to Indexing.
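
As a quick illustration (the table and index here are invented for the example), an index covers a query when everything the query touches is in the index, often by adding included columns:

-- A hypothetical table: this index covers the query below
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);

-- Covered: the key satisfies the WHERE clause and the INCLUDE columns
-- satisfy the SELECT list, so the base table is never touched
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;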

Future versions of SQL Server will include automatic index tuning, but I think we’ll always need to balance the number of indexes we have. We’ll also often want to build some indexes that will ensure the system works fairly well when it’s first installed. That means data professionals (DBAs and developers) need to understand how indexes work and ensure they can choose a good index most of the time.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Are You a Traffic Cop?

There’s a joke that a DBA’s favorite word is “no”. I think that was certainly true for the individual from whom I first learned about Oracle and SQL Server administration. He constantly pushed back on developers and management who requested changes. He wasn’t rude or abrasive, though I’ve met my share of those individuals. Instead, he patiently would explain and debate the changes and his reasons for not wanting to make them. He had come onboard after a particularly bad deployment affected a significant portion of the company and while management listened to him, development wasn’t pleased.

One of the analogies I’ve seen for DBAs is that they are a traffic cop. They implement the rules necessary for the safety, protection, and availability of database systems. This is sometimes at odds with the pressure developers feel to implement new features and functions in their applications that require a database. This can be, and often is, resolved by using some methodology like DevOps, where the DBA, the developers, and operations staff discuss changes in advance and collaborate to ensure that functionality will not negatively impact data quality, performance, or availability.

However, this can easily take a negative turn. People often have short memories, and we are all usually busy at work. When a DBA lets a change go through that causes problems, they are reluctant to let the next one go without a thorough review. They slow down and want to double check every new line of code, or worse, they start to second guess their own decisions to allow changes, which can further exacerbate the issues.

Are you a traffic cop as a DBA? I saw this referenced as a great analogy recently, where the DBA enforces rules, perhaps takes on a bit too much power, and uses those rules to limit change. They lock down too much access, even preventing metrics and data from providing feedback to developers. They may even seek to strictly enforce security by the letter of the rule, not the spirit, upsetting and angering others.

I think good security is important, and often bypassing security for the sake of convenience or expediency is a source of issues, including data breaches. There are policies that are outdated or may not apply in specific situations, and I’d say the sign of a good DBA is that they know when an exception is needed, they document and manage the process, and may even seek to have the guidance changed. They work with others to get things done, constantly looking to push work out to customers, while protecting systems. They may slow work slightly, but a good DBA works with everyone to minimize delays.

If you’re a traffic cop, I’d argue your employment days are likely numbered. The world is changing, and while security grows more important, the ability to get work done securely and quickly, while working with others, will become more important over time.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.


If only the US would follow …

I saw this blog from Allan Hirt and I wish the US would adopt something strong like this. I’d actually like this to apply to all organizations, but certainly critical services need to be secure. If you follow the link, you’ll see that the UK government has warned their critical industries that if they do not have effective measures or safeguards against cyber attacks, they can be fined up to £17 million. That might not seem like a lot in some industries, but it should get some attention from executives. I’m not sure how many CxOs would keep their jobs if they incur that level of fine because they didn’t implement strong security measures.

For now the requirements apply to the energy, transport, water, and health industries, which are deemed essential by the UK government. The UK government expects that, along with the data privacy changes needed for GDPR compliance, these industries will implement better cyber security to prevent or limit attacks. This is part of guidance from the European Parliament, and it’s overdue. I just wish the US were as focused on pushing organizations to adopt security as a priority rather than an afterthought.

Not that I want government to dictate specifics, but I do think that having a government authority that can stay up to date and evolve their view of what constitutes good security is a good idea. This could be similar to some sort of review and feedback situation that we have for auditing. Ultimately, I’d like there to be some group that can weigh in on good security practices for platforms and systems, probably with research and industry feedback, on what constitutes valid patch levels for systems and software. It would be valuable to know that your version of Windows or Debian or PHP or the database platform is insecure. Not that I want to create more of an upgrade treadmill, but using software means patching it.

Perhaps this would drive more organizations to move to open source software, or more vendors to issue patches rapidly and lower their prices to compete. Maybe more importantly, it would press vendors of third party software to ensure they continue to develop security patches, perhaps even spelling out support lengths in contracts. The pressure to perform better would be useful in raising quality in the security area. One thing to note is that the intent isn’t to fine companies, but to ask them to make valid risk assessments and take appropriate measures (with input from regulators).

For now, I’d say that most UK organizations ought to start taking security more seriously. Making changes in platforms to prevent attacks and limit downtime will require some planning and foresight. You might not be in an industry affected today, but in two or three years that could change.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.9 MB) podcast or subscribe to the feed at iTunes and Libsyn.


A New SQL Server Edition

I first heard about this edition of SQL Server over a year ago, and at the time I wasn’t sure there was much of a market for it. Microsoft disagreed, and they moved forward, putting this in preview and then finally releasing it this past week. Now you can order and set up an Azure SQL Database Managed Instance. According to Microsoft, this is an edition that is near 100% compatible with on-premises instances. The difference is this is a PaaS option that removes much of the administrative burden for patching, backup, HA, etc.

This seems like a VM that Microsoft manages and you connect to, except this isn’t really a VM. You do not have any access to the host, but you do get the option to have multiple databases, and most of the features missing from Azure SQL Database are here. Keep in mind that near 100% is not 100%: there are restrictions, quite a few of them, which may limit your ability to lift and shift your particular application.

Brent Ozar Unlimited tackled this feature first, putting together quite a few blog posts on the topic. The first one notes that a new instance might take 24 hours to deploy (in the cloud??), and theirs did. In that post, they also discuss the networking, which isn’t necessarily hard, but it’s not as simple as most DBAs and developers are used to. In fact, while reading the post, I recalled memories of my time as a network admin, learning how to migrate from IPX to IP based addressing. For those of you not used to networking terms and configuration, you might want to read about this topic from the experts.

The Azure fabric is managing all the HA behind the scenes, and I’m guessing they’ve got some advanced features that might end up in the box product at some point. It seems from reading Brent and Erik’s experiments that you really get a dedicated instance, with some CPUs and schedulers turned off if you don’t provision the largest instance. There are new objects inside the instance, which, again, might get into the box at some point.

Are Managed Instances for you? Perhaps. Removing this administrative burden might make financial sense for your organization. Maybe not for you personally, but when you factor in taxes and the staff needed to perform tedious actions, maybe this works. Certainly you should understand whether you have an option here for any applications, then present this as an alternative to the business and let them decide if it makes sense. I’m not sold, but I do like the idea of this as a new cloud option. We have public, government, private, hybrid, IaaS, PaaS, and SaaS options. This slips somewhere between hybrid and PaaS, giving us more choice, which I do think will work for some organizations.

I don’t know how these will perform, and really none of us do outside of Microsoft. They’ve been testing, and I hope this will work well, fail over cleanly, and not be so confusing that customers find themselves misconfiguring networking. However, if you’re using parts of Azure, have AAD implemented, and want to get new instances provisioned that look mostly like what you have in your data center, take a look. Hopefully the provisioning process will smooth out and speed up, since a 24-hour wait might be acceptable for some large corporations, but it certainly doesn’t feel like the cloud to me.

Steve Jones


The 700 Milestone

I’ve watched various SQL Saturday milestones go by. I missed 100 in Brazil, but did manage to get to 200 in Philadelphia and 300 in Kansas City. 400 was in Torino, and I am sad I didn’t make the effort to get there. 500 was Boston, but I had a conflict. 600 was Chicago, but again, other plans.

Colorado Springs was awarded 700, and I had to submit ASAP to that one. I’ve got a morning commitment for volleyball, but I’ll be racing down to get to the event and attend the after party.

I’ve got a repeat presentation, Branding Yourself for a Dream Job, but there are some other great talks: PowerBI from TJay Belt and Meagan Longoria, optimizer secrets from Brian Hansen, data science from Reeves Smith, and more.

Plus there are two great pre-cons on Friday. The Midnight DBAs talking Enterprise Maintenance and Kevin Kline giving you 50 things you need to know.

If you’re nearby, get a free day of training and say hi in the afternoon. If you can convince the boss, I’d say you should take advantage of one of these pre-cons as well.


Great Developers Use Source Control

I was rewatching Ike Ellis’ (b|t) talk on the habits of great SQL developers from SQL in the City 2018, and his first item was “Use Source Control”. I happen to agree with Ike, which is why I’m writing this, and I really hope you do as well. Certainly at Redgate we’ve built tools that help you get your T-SQL code into a VCS, but whether you work manually, use someone else’s tool, or use our SQL Source Control or ReadyRoll, I’d ask that you consider getting all your code into some sort of VCS (Version Control System).

Ike notes that if viewers did this one thing, he’d be thrilled. I agree. Please, learn to use version control. If you wonder why, listen to Ike’s talk. He relates a story that notes that using source control doesn’t make you a better developer, but that better developers do use a VCS. This is a habit that helps build better habits and is a step on the journey to you becoming a great developer.

Does it help? Well, I think it does in some sense. Developers who use a VCS often build a habit of checking in changes before they try something that might be problematic. They also work in a more integrated fashion, and easily roll back problematic code without wasting time (or focus) trying to undo something. They get a previous version back and move forward.

I do think that this one thing changes the way you view code, and it provides you with a safety net. This is one of those skills that I’d really recommend you learn, as it will repay its value tremendously over time as you learn to depend on the VCS and stop doing things like keeping multiple copies of objects or files around and trying to sort out which code is where. As you work with others, or even with your past self, you’ll learn to include better comments that help you change focus quickly and understand the particular reason behind a version of code. This will help you become a better developer.

There are numerous ways to get your T-SQL code into a VCS. There are tools, but there are plenty of PoSh or other scripting methods. In fact, every DBA should get in the habit of scripting out instance level objects (which most tools don’t handle). Store them away, and then repeat as you need to make changes. You might be surprised how often you’ll be glad you have the previous version of a job, a schedule, a linked server, or more. I learned to keep all my scripts, from replication to running a quick report for a business user, in a VCS. I’ve never regretted this choice.
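
With dbatools, for example, a sketch like this would get your Agent jobs into files ready to commit (assuming the Get-DbaAgentJob and Export-DbaScript cmdlets, which exist as of this writing):

# A sketch: script each SQL Agent job to its own file for the VCS
Get-DbaAgentJob -SqlInstance Plato\SQL2016 |
    ForEach-Object { Export-DbaScript -InputObject $_ -Path ($_.Name + ".sql") }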

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.7 MB) podcast or subscribe to the feed at iTunes and Libsyn.
