The Foundry and Data Masking

There’s a group at Redgate that investigates new ideas and products. They’re called the Foundry, and they do some cool things. One of those is work on data masking. They’ve got a whole section on the Redgate site. Check it out and see what you think.

I’ve seen some of their early work in other areas over the years, and it can be interesting to think about future ideas. Some of our products have come out of research, so I’m always looking to see what they’re up to.

I think data masking is a very useful idea, and you can check out some of what they’re thinking now. It will be interesting to see what comes out of this.


Moving Through Five Years

I wrote The Five Year Plan in mid-2013. In it, I noted a prediction that IT departments wouldn’t exist in five years, meaning by mid-2018. That’s a year and a half away. Is that a possibility?

I don’t think so. The more I work in the technology world, the more I see a need for humans to help manage the systems and data. The systems are complex, the small details of getting a platform up and running are varied and not standardized across any two companies, and I can’t envision a complete self-service world. As easy as the Azure or AWS consoles can be, the mindset of those platforms still expects a technical person to choose options and provision systems. After all, how many of your non-technical friends understand what geo-redundancy is?

It doesn’t seem that IT departments are really shrinking. As I look through various surveys, employment statistics, and predictions, it seems that almost all positions in IT are still growing and hiring. The outlook for the next few years is still good, and pay is still rising overall. What does that mean for all the DevOps, self-service, and BYOD vendor support that hints at fewer jobs for many administrators?

I suspect that there are trends at some companies where mundane, less skilled, easy-to-automate jobs are being replaced by automation. Some companies may even eliminate certain jobs, like the Database Administrator, but they don’t really eliminate people. Those individuals who can learn to handle other work and become more efficient still keep their jobs, albeit with different titles. Some work may get handled by systems, but much of it just gets distributed to other staff as a part of their jobs.

I’ve seen this in software development at companies that eliminated testers. Developers and operations staff become responsible for different aspects of testing. Each person spends a little time testing, in addition to their other work. Everyone ends up doing a little less of what they used to do, but a little more of something new. This also usually requires a larger development staff to cover the work the testers used to do. Often this means the department remains the same size, some testers become junior developers, and we’ve simply moved work around. The shared responsibility might actually improve overall quality, since the impact of poor code gets noticed by more people.

I think this is what will happen with many operational IT staffs. Perhaps some companies will try to eliminate the IT department, but they’ll really just move the staff to different departments, changing the reporting structure and perhaps expanding some people’s responsibilities. They’ll likely still have the same number of “IT staff,” even if they don’t call it that.

This doesn’t mean that each of us can count on gainful employment at our organization until we retire. Most of us should constantly get better at our jobs and learn more about technology. I would recommend you learn new skills, but also regularly practice and polish your old ones. Become better at your craft, even as you might choose to grow your career in new ways.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( MB) podcast or subscribe to the feed at iTunes and Libsyn.


Opening the PowerShell ISE from Explorer

This is a cool productivity trick, and one that I ran into by accident. I had heard that I could launch a command prompt by typing cmd in the Explorer address bar. That works, and it’s cool. It even works with ConEmu, which is my default command window.

However, while I was looking for other hints, I found that powershell_ise also works. That will launch the PowerShell ISE editor with the current folder as the default one in the lower (console) pane. The Open dialog still points at the previous location, though, and I’m not sure how easy that is to change.

In any case, here’s how this works. Browse to a folder in Explorer, such as a GitHub repo I have:

[Screenshot: Explorer open to the Load-SQL-Saturday-Sessions repo folder]

Type “powershell_ise” in the address bar:

[Screenshot: powershell_ise typed into the Explorer address bar]

Hit Enter, and the ISE appears. Note the path in the lower pane.

[Screenshot: Windows PowerShell ISE with the console pane set to the repo folder]

That’s it.

Quick and easy. If you want to get into the ISE to run something or write some code, this is one way to do it from a folder.
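
If you live at a PowerShell prompt rather than in Explorer, a tiny helper gets the same effect. This is a minimal sketch; Open-IseHere and the repo path are my own examples, not anything built in.

# Hypothetical helper: launch the ISE with its console pane rooted at the
# current folder, the same effect as the address-bar trick above.
function Open-IseHere {
    Start-Process -FilePath 'powershell_ise.exe' -WorkingDirectory (Get-Location)
}

# Usage: change to a folder, then run the helper.
Set-Location 'C:\GitHub\Load-SQL-Saturday-Sessions'
Open-IseHere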


Why Devops? For Better Security

The ideas of DevOps are a mixture of principles, recommendations, tools, processes, attitudes, and more. There isn’t any one way to implement a DevOps process, and plenty of people have been working in what many would consider a DevOps environment without calling it that. I really like Donovan Brown’s definition: “DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”

That sums it up nicely, but what are some of the “value” items that we can deliver to our customers? Today I want to discuss one of these: security.

The historical view of a secure system is one that gets secured, rarely changes, and where every allowed change is reviewed to ensure no mistakes are made. That view fits fine in a DevOps software pipeline, well, except for the rarely-changes part. Does that make an application built with DevOps less secure? Let’s turn that around. Is a traditional (waterfall, agile, etc.) application more secure because of those limitations?

I’d argue it’s not. One of the challenges with security is that the threats, holes, and vulnerabilities constantly change. What was secure last week might not be secure this week. In traditional applications we find one of two things. Since deployments are relatively rare, security problems often remain unpatched for long periods of time, or they are patched quickly by changes to production systems that are not well tested or evaluated. There are countless tales of changes made to production applications that end up breaking the system and must be removed. The result is a less secure system. This can be especially problematic when dependent software, for example OpenSSL, is not patched because there are so many dependencies that no one is willing to change the system for fear of causing downtime.

In a mature DevOps environment, the system is better understood because the software is regularly built, testing is automated, and there are regular deployments to various downstream environments. Security patches can be incorporated and deployed quickly, letting our automated testing process and intermediate environments look for potential issues. With a regular branching strategy, we can even quickly suspend current development and focus on producing a patch or changing other code to ensure a successful deployment. Because we practice regular deployments, the need for untested, cowboy code changes in production is eliminated.

Certainly a DevOps process doesn’t preclude mistakes. It doesn’t ensure developers or administrators won’t create vulnerabilities (intentional or accidental). What DevOps does ask is that we continually learn and get feedback from our efforts, and that we incorporate that feedback into our process. If we find a problem in how we write code, a missed test, or a problem in deployment, we correct that in our automated process to prevent it from happening again. And since every task, every build, every deployment is logged, we can audit everyone’s actions. DevOps certainly encourages more security, though not perfect security. The goal is that a DevOps process gets us a little better security every time we learn something.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 5.3MB) podcast or subscribe to the feed at iTunes and Libsyn.


Let NonAdmins Get Logins

I saw a question recently about how to allow some users to see the logins on a server without being a sysadmin or securityadmin. This was in support of a migration effort, so users needed read rights without being able to change anything.

This appears to be a good place to use the WITH EXECUTE AS option for a stored procedure, so I decided to try it. I have a normal, non-privileged user, JoeDBA, with rights to connect to my Sandbox database, and I decided to create a proc as a sysadmin user.

Here is my code. Notice the WITH EXECUTE AS OWNER option. Since I created this with my sjones (sysadmin) account, the procedure should be able to call into master and get the data.

CREATE PROCEDURE GetLogins_NonSA
WITH EXECUTE AS OWNER
/*
Description: Return the list of server logins for non-sysadmin users.

Changes:
Date       Who         Notes
---------- ---         ---------------------------------------------------
1/24/2017  PLATO\Steve Initial proc to get server logins
*/
AS
BEGIN
SELECT name
     , principal_id
     , sid
     , type
     , type_desc
     , is_disabled
     , default_database_name
FROM master.sys.server_principals;

RETURN;
END
GO

GRANT EXECUTE ON GetLogins_NonSA TO MigrationRole;
GO
ALTER ROLE MigrationRole ADD MEMBER JoeDBA;

Now, I can log in with the low-privileged user. By default, a query against sys.server_principals should only return my own login. If I query the catalog view directly, I get this:

[Screenshot: querying sys.server_principals as the low-privileged login returns only that login]

Now, I can execute the procedure. I’ve scrolled the results a bit, but you can see I can view the other logins.

[Screenshot: executing GetLogins_NonSA as the low-privileged login returns all the server logins]
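
If you want to compare both behaviors from a single sysadmin connection, impersonation is a quick way to test. A minimal sketch, using the JoeDBA login from above:

-- Impersonate the low-privileged login to compare the two results.
EXECUTE AS LOGIN = 'JoeDBA';

-- Direct query: returns only the principals JoeDBA is allowed to see.
SELECT name FROM sys.server_principals;

-- The procedure runs as its owner, so it returns all the logins.
EXEC dbo.GetLogins_NonSA;

REVERT;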

Certainly I could limit the columns returned, or transform them to provide more data, but this is a good way to give a login read-only access to information about other logins.


Backups Aren’t Backups Until a Restore Is Made

One of the interesting things I saw in the recent GitLab outage and data loss was the fact that none of their backups were available. They use PostgreSQL, and I’m not familiar with the ways in which the modern PostgreSQL engine handles backups or the options you have, so I’m not knocking either GitLab or PostgreSQL. It’s possible one or the other had fewer options than we do with SQL Server and our full, differential, log, and filegroup backups, all available during live database activity.

There was a live stream and a Google Doc open during the incident, showing the response by their employees (and plenty of Hacker News comments). Kudos to GitLab for their bravery and transparency in showcasing their mistakes and choices. I’ve been in similar situations, and the war room can be chaotic and stressful. There has been no shortage of times when someone makes a mistake under pressure and we scramble to recover from the damage. I’ve made those mistakes, and I understand how they happen when you’re desperate and tired. This is one reason I’ve usually insisted that when an incident is declared, I immediately send at least one person home to rest. I never know what time I’ll need to bring them back.

In reading the notes, there are a number of issues:

  • One of the responders doesn’t know where the once-a-day backups are stored.
  • The location they check has files only a few bytes in size, so the backups might not be working.
  • There are no disk snapshots in their Azure space for the database servers, though the NFS servers get them.
  • The snapshot process is incomplete: once snapshots are made, some data is removed from production, and that data will be lost in this recovery.
  • The backups to S3 don’t work.

All of this results in a backup that is six hours old being restored. For people who commit code often, that could be a lot of data. Hopefully there weren’t too many merges and branch deletions in that window for customers.

A backup doesn’t matter. A restore matters. It doesn’t matter what backup process you have; if you don’t test it, then you don’t know if you can recover. In fact, with databases (really, any system), you need to test restores regularly because the backup process can fail. I learned this early in my career, when one of our admins realized that his fancy tape changer, which let him change tapes only once a week, was broken. The drive had stopped writing and he never noticed.

Not only is it important to monitor that the backup process runs, it’s important to ensure the backup files exist where we expect them to exist. If that’s a remote location, you need monitoring there as well. It’s also important to restore backups regularly. Ideally you’d test every one, but at least get into a regular rotation of testing once a week to ensure your process works.
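
In SQL Server, even a small scripted test catches most of the failures in the GitLab list. A minimal sketch; the paths and database names are examples:

-- Quick sanity check that the backup media is readable and complete.
RESTORE VERIFYONLY FROM DISK = N'\\backupserver\backups\Sales_Full.bak';

-- VERIFYONLY doesn't prove the data is good; an actual restore does.
RESTORE DATABASE Sales_RestoreTest
FROM DISK = N'\\backupserver\backups\Sales_Full.bak'
WITH MOVE N'Sales'     TO N'D:\RestoreTest\Sales.mdf',
     MOVE N'Sales_log' TO N'D:\RestoreTest\Sales_log.ldf',
     REPLACE, STATS = 10;

-- Finish by checking the restored copy for corruption.
DBCC CHECKDB (Sales_RestoreTest) WITH NO_INFOMSGS;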

If you don’t, then you risk not only data loss, as GitLab experienced, but an RGE. That’s a resume-generating event, and it’s something none of us would like to experience.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 5.9MB) podcast or subscribe to the feed at iTunes and Libsyn.


A View of the Cloud

Amazon has been an amazing digital company over the last twenty years. I remember making my first order from them, unsure whether the online bookstore would work better than browsing locally, or if I’d even get my books. I watched them transform into a great shipping company that could sell anything, then a digital reading company with the Kindle, and even an amazing cloud hosting provider with world-class programming platforms. At each stage, Amazon has grown to become even more efficient and seductive, slowly gaining more of my business over the years.

Whether you like the business model and practices of Amazon or not, part of the reason Amazon has become a successful company is that they have an incredible software development process and great developers, both of which have produced a very impressive software stack. Apart from their web site, they host Amazon Web Services, a dizzying array of services that anyone can purchase at any time to get some software up and running quickly. Even their free tier is impressive in the number and type of services offered, including database access.

I ran across a piece on the 10-year anniversary of AWS that contains some software development lessons from Werner Vogels, CTO of Amazon. It’s a good list of principles to follow, even if some might not apply to your particular environment in the same way. Certainly planning to evolve and to handle failures is something that most of us need to consider. Including security from the ground up, however, is one thing that many of us know to do but lack good patterns, practices, or habits for. Even companies that work in security-conscious areas often leave security concerns until they are well into, or even finished with, their core software development.

The one area that I think is going to become more important for all of us in the future is understanding resource usage. Certainly we’ve needed to know some baselines and expected growth in the past to plan for upgrades and future needs. However, with the move to VMs and cloud services, how many of us really know what resources we would require for peak workloads, and when? I suspect most people don’t really know how to map their instance resource usage across different virtual systems, much less evaluate whether their system would be cost effective in a cloud scenario. A better understanding of the resources we really use is something that more and more organizations are going to expect from their technical professionals. I only hope we get some tools to help us here, as this is a complex topic.

The move to cloud services is going to happen for many of us. We might not move all our data, or even much of it, but as we experiment with new platforms, as companies work to avoid capital expenditures, and as the platforms and services mature, more of us will end up with some cloud platform in a portion of our organization. During that journey, or even while you still work with on-premises systems, I’d keep the principles in Mr. Vogels’ article in mind. They’ll help you build better software in the future.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 4.9MB) podcast or subscribe to the feed at iTunes and Libsyn.


Get Your SQL Server Product Key with Powershell #dbatools

There are lots of tasks that are easily accomplished with both PowerShell and T-SQL.

Get-SqlServerKey is a dbatools command that will find your SQL Server product keys. It’s not often you might need this, but it’s a pain to track keys down in the registry and be sure you get them for all your instances.

Here’s the view from one of my development machines:

[Screenshot: Get-SqlServerKey output listing the local instances and their product keys]

There is a limitation here: the command appears to have issues with SQL Server 2016, but it’s still helpful for other versions. You can also give it a list of servers from a file, as well as credentials to connect with.
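
Here’s a hedged sketch of how that might look. I haven’t confirmed these parameter names against every dbatools release, so treat them as assumptions and check Get-Help Get-SqlServerKey in your copy of the module:

# Scan a single instance (assumed parameter names; verify with Get-Help).
Get-SqlServerKey -SqlServers 'localhost\SQL2016'

# Scan every server listed in a text file, using an alternate credential.
Get-SqlServerKey -SqlServers (Get-Content .\servers.txt) -SqlCredential (Get-Credential)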

A quick, handy item, and one that should make auditing (pre-2016) SQL Servers easy.


Your Favorite Feature that Needs Work

The SQL Server platform has grown tremendously since I started working with SQL Server v4.2. We’ve had various subsystems added, lots of enhancements, and even new languages. There have also been a few parts of the platform that were discontinued, such as Notification Services and English Query. There are items that are deprecated, though far fewer of them have actually been removed from the product. I’m not sure whether that’s better or worse for all of us.

Apart from the deprecated, discontinued, and removed features, there are also lots of features in the platform that are stale. They receive very little development and are perhaps only lightly featured on the What’s New pages, or they never get any development from version to version.

This week, I’m curious which features you use or like that need help. SQL Server v.Next is in development, and while I don’t think major changes will be made between now and RTM, perhaps there are improvements that could be made in some areas. I’m sure there are a few SQL Server developers at Microsoft who have time for a few more pull requests in their particular area of development. With that in mind…

What feature in SQL Server do you really like and wish would be improved in the future?

Your idea could be large or small. Perhaps you have a language enhancement in mind. Maybe you run setup regularly and there is a change that might streamline that process. It may be that replication conflict resolution is a constant problem for you and you’d like a few development hours spent on it. It could be anything that you use, or would like to use. Perhaps you have an idea for an improvement, or maybe you just want something fixed. Let us know today.

The place to log bugs or feature requests is the SQL Server Connect site. Post your idea there, then put a link in the discussion and make a case for why people should vote for it. Who knows? Perhaps you’ll get an improvement to the feature you like in the next version of SQL Server.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( 3.1MB) podcast or subscribe to the feed at iTunes and Libsyn.


Using a DMK in TDE

When you set up TDE, you need to create a DMK (Database Master Key) in the master database if one doesn’t already exist. This key forms the basis of a chain of protection for your data inside the SQL Server instance. The hierarchy is (a T-SQL sketch follows the list):

  • The DMK exists in the master database. This protects a certificate or asymmetric key by encrypting it.
  • The certificate in turn encrypts the DEK (Database Encryption Key) in a database using TDE. This certificate is required to open the DEK in the database, or to open the DEK during a restore operation.
  • The DEK encrypts the data.
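
Here is a minimal sketch of building that hierarchy. The Sales database, the names, and the passwords are examples, not anything prescribed:

-- 1. The DMK in master, protecting keys and certificates stored there.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'AStr0ngDMKPassw0rd!';

-- 2. A certificate in master, protected by the DMK.
CREATE CERTIFICATE TDECert WITH SUBJECT = 'Certificate protecting TDE DEKs';

-- 3. The DEK in the user database, protected by the certificate; then TDE on.
USE Sales;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE Sales SET ENCRYPTION ON;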

There is only one DMK in each database, so the DMK in master for TDE could be used to protect multiple certificates, asymmetric keys, or symmetric keys that exist in master for use by TDE or any other encryption mechanism.

A particular DMK is not needed to restore a TDE database on another instance. Rather, the new instance of SQL Server should have its own DMK created in the master database. Then the backup of the certificate is restored on the other instance, protected by the DMK there. Once this is complete, the TDE database can be restored.
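
A sketch of that sequence; the file paths, passwords, and names are again examples:

-- On the target instance: create its own DMK, not the source's.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'ADifferentDMKPassw0rd!';

-- Restore the certificate (and its private key) backed up from the source.
CREATE CERTIFICATE TDECert
    FROM FILE = N'C:\keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = N'C:\keys\TDECert.pvk',
                      DECRYPTION BY PASSWORD = N'CertBackupPassw0rd!');

-- Now the TDE database backup can be restored here.
RESTORE DATABASE Sales FROM DISK = N'C:\backups\Sales_Full.bak';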

If you’d like to know more about what a DMK is, I’ve written about it in a previous post.
