Lots of Learning at SQL Bits

This year SQLBits returns to Telford, UK, on April 5-8, 2017. I’ll be there, presenting on Friday and enjoying the show the rest of the time. If you’ve never been, it’s a fantastic, fun, casual event, with attendees from all over the world coming to learn, teach, and get excited about SQL Server. It isn’t the largest SQL Server event, but it’s got the best atmosphere and doesn’t have all the hassles of some other events.

I’ve attended most of the SQL Server conferences in the world, and if I had to choose only one to attend, it would be SQLBits. The others are good, but SQLBits is my favorite. I’ve been many times, and I’ve watched the event grow over the years. With the venue moving from year to year, it’s also a chance to experience different parts of the UK.

One of the neat things about SQLBits is the mix of training offered on different days. The event started with one day of pre-con learning, a paid training day on Friday with more technical sessions, and a free day on Saturday. That has grown to two full days of pre-con training, and if you’re looking for a good deal on learning a new technology, you should come spend Wednesday, April 5, or Thursday, April 6 with one of the world-class instructors.

I don’t get much of a chance to attend classes, but since I’ll be there, I’m hoping to sit in on a class each of those days. There are many to choose from, and fortunately I’ve seen a few, so my choice isn’t as hard as yours. Whether you want to learn HA, Power BI, T-SQL, Text Mining with R, or more, I’m sure you’ll find one or two days’ worth of valuable training. In fact, if you’re going to make the journey to Telford, you should spend both days in class. Whether it’s directly useful in your job right now or just something that interests you, I bet you’d find two days of intense training beneficial.

If you decide to come, you’ll save a bit of money by registering soon. The full conference registration price goes up on Mar 4, so push your boss to send you today. I’ll be there, and I hope to see a few of you as well. Be sure to say hi if you make the journey to SQLBits in April.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.8MB) podcast or subscribe to the feed at iTunes and Libsyn.


Posted in Editorial | Tagged | Leave a comment

DevOps Webinar Tomorrow

A quick reminder that tomorrow, Feb 21, at 12pm EST, I’ll be hosting another DevOps, Database Lifecycle Management (DLM) webinar. Together with Arneh Eskandari, we’ll show how we can each make changes to our own database, push the changes to Git, and resolve merge conflicts.

Register now, and watch us work together to perform distributed database development.


SQL Clone and Encryption

I’ve been testing and working with SQL Clone for the last few months, trying to help the devs at Redgate ensure this product is ready to go when it’s released. I’ve written a few pieces on this blog, and I’ve got a new one up on the Redgate blog that looks at Clone and TDE.

When I started looking at this, I assumed SQL Clone would work. After all, the product tries to ensure that a clone looks just like any other database to SQL Server. It does, and TDE just works.

I’m still looking at other features, but so far, everything works great. The clone databases are indistinguishable from normal databases, except that I can blow one away and recreate it from the image in seconds. That’s pretty cool, especially when you’ve messed up a bunch of data while writing code that wasn’t quite production ready.

I’m looking forward to seeing how others use SQL Clone and how it will help developers work with larger, known data sets in their own sandboxes. Perhaps we’ll finally get rid of the shared database and stop developers’ work from unnecessarily impacting their teammates.


Off to Germany

I’ve been in the UK for almost a week at the Redgate Software offices, meeting with product groups and catching up with coworkers. It’s been a good visit, but I’m off to SQL Konferenz today. I fly early from London to Frankfurt for the event.

I’ll be speaking on DevOps and databases tomorrow afternoon, so I’ll spend a little time visiting people and checking out the pre-cons, and a good chunk of time practicing and making sure the presentation is ready to go.

It will be a fun day in Germany, and I’m looking forward to seeing if I can goad a taxi driver into getting up close to 200kph on the way from Frankfurt to Darmstadt.


New Solutions for Old Problems–T-SQL Tuesday #87

This is an interesting T-SQL Tuesday this month. For #87, we have an invitation from Matt Gordon. The topic is using new tools to solve old problems. The “new” cutoff is SQL Server 2014, so we’re looking at ways the last two versions of SQL Server have helped solve an old problem.

This is the monthly blog party that picks a topic and has everyone with a blog writing on it. You can do that, too: just add an entry on your blog on Tuesday, February 14. Or start a blog and join in.

Old Problems

I’ve got no shortage of those, from my current and past jobs. However, in my current job as editor of SQLServerCentral, we use SQL Server 2008. I’ve got a few problems there, but they’d be solved by SQL Server 2012, so they don’t qualify. Perhaps there’s some improvement to AGs in SQL Server 2016 that might work well for us, but I haven’t really looked, since the pressing items are SQL Server 2012+ ones.

However, there is an issue I had in a previous job that was a real problem. SQL Server 2016 provides a great solution that I wish had been available in SQL Server 2005+.

I once worked for a financial securities company where we had multiple clients in a single database. Each client managed a portfolio of their own, and we stored the data and provided an application that limited their access to sensitive data. We did this with a series of views and procedures designed to check the clientID against the logged-in user. The original designer had limited database experience and ended up putting the client ID in almost every table. While that worked OK, it limited flexibility, and we had issues when two clients from the same company needed to manage the same portfolio. They’d end up sharing a login because we couldn’t handle flexible security.

Enter SQL Server 2016 Row Level Security. This would have been a perfect solution, as we could have limited access to data based on the client login, combined with a predicate function that we wrote. Because that function is flexible to write and follows the user around without requiring joins in queries against the table, we could have more easily implemented flexible security on rows of data without drastic alterations to our database design.

Actually, these days I wouldn’t recommend SQL Server 2016, but rather Azure SQL Database, using small, separate databases for each client, with RLS implemented for the various employees that needed to manage separate portfolios. A simple join table referenced in our security predicate would let us limit access without burdening developers with building new views or stored procedure checks to correctly enforce our security model.
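To make that concrete, here’s a minimal sketch of how an RLS setup like that might look. All of the object names here (PortfolioAccess, Trades, PortfolioID) are invented for illustration; the real design would differ, but the shape of the predicate function and security policy is the same:

```sql
-- Hypothetical join table mapping logins to the portfolios they may manage
CREATE TABLE dbo.PortfolioAccess (
    LoginName sysname NOT NULL,
    PortfolioID int NOT NULL
);
GO

-- Inline table-valued predicate function: a row qualifies only when the
-- current login is mapped to that row's portfolio
CREATE FUNCTION dbo.fn_PortfolioPredicate (@PortfolioID int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    FROM dbo.PortfolioAccess pa
    WHERE pa.PortfolioID = @PortfolioID
      AND pa.LoginName = SUSER_SNAME();
GO

-- Bind the predicate to a (hypothetical) Trades table; every query is
-- filtered automatically, with no extra joins or view changes in app code
CREATE SECURITY POLICY PortfolioFilter
    ADD FILTER PREDICATE dbo.fn_PortfolioPredicate(PortfolioID)
    ON dbo.Trades
WITH (STATE = ON);
```

With this in place, two clients from the same company could each have their own login mapped to the shared portfolio in PortfolioAccess, and the filter would follow each of them around transparently.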

I think RLS is the best security feature in SQL Server 2016, and while I wish it had been implemented in previous versions, I’m glad it’s finally here.


Problems With Database Problems

GitLab had a database problem recently. I’m sure you read about it. There have been commentaries from many people, including Brent Ozar and Mike Walsh. There are many ways to look at this outage and data loss (the extent of which is not known), but I’d like to stop and focus on a couple of items that I think stand out: competence and care. I don’t know how we prevent all such problems, but I certainly think these items are worth pondering.

First, there is the question of competence. I have no idea what skills or experience the GitLab staff who responded to the event have. They certainly seem to understand something about replication and backups, but were they skilled enough to understand the mechanics of PostgreSQL (or their scripting) deeply enough to determine where things were broken? I have no idea, and without more information I won’t question their competence. The thing to ask, whether for this incident or your own, is whether the people working the problem are well enough trained to deal with the issues. Perhaps most important, do they realize when they have reached the limit of their expertise? Do they know when (and are they willing) to call in someone else or contact a support resource?

I saw a note from Brent Ozar that the GitLab job description for a database specialist doesn’t mention backups. It does ask for a solid understanding of the parts of the database, which should include backups. I’d hope that anyone hiring a database specialist would ask how a candidate deals with backups, especially in a distributed environment. It’s great to give database staff a chance to work on the application, tune code, and build interesting solutions to help the company, but their core responsibility and focus need to be on keeping the database stable, which includes DR situations.

The second item I worry about is the care someone takes when performing a task. In this case, any of us might have been tired at 9pm, especially after spending the day working on a replication setup, which can be frustrating. Responding to a page, especially for a security incident, can be stressful. Solving an issue like that, and then having performance problems crop up, is disturbing. Anyone might question their actions, wondering if they had made a mistake and caused the issue. I know that when multiple problems appear in a short time, many of us would struggle to decide whether two issues are coincidental or correlated. I’m glad that after the mistakes, the individual responsible handed off control to others. As with any job, once you’ve made a serious mistake, you may not perform at your normal level, and it’s good to step back. Kudos, once again.

The ultimate mistake, and one that many of us have made, is to run a command on the wrong server. Whether you use a GUI or a command line, it’s easy to mistake db1 for db2. I’ve tried color coding connections, separate accounts for production, even trying to get in the habit of looking at the connection string before running a command, but in the heat of the moment, nothing really works. People will make mistakes, which is why it becomes dangerous to allow any one person to respond alone in a production crisis. As a manager, I’ve wanted employees to take care and to use a partner to double-check code before actually executing anything.

And above all, log your actions. I have to say I’m very impressed with GitLab’s handling of the incident and their live disclosure. This is what I like to see in a war room: lots of notes, open disclosure, and a timeline that allows us to re-examine the incident later and learn from the response. This is an area too few companies want to spend resources on, but learning from good and bad choices helps distribute knowledge and prepare more people for the future. I’d like to see more post-incident reviews disclosed by many companies, especially cloud vendors. I can understand not disclosing too much information while the crisis is underway, as I’d worry some security-related information might be released, but afterwards, I think customers deserve to know just how well their vendor deals with issues.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (6.1MB) podcast or subscribe to the feed at iTunes and Libsyn.


The Foundry and Data Masking

There’s a group at Redgate that investigates new ideas and products. They’re called the Foundry, and they do some cool things. One of those is work on data masking. They’ve got a whole section on the Redgate site. Check it out and see what you think.

I’ve seen some of their early work in other areas over the years, and it can be interesting to think about future ideas. Some of our products have come out of research, so I’m always looking to see what they’re up to.

I think data masking is a very useful idea, and you can check out some of what they’re thinking now. It will be interesting to see what comes out of this.


Moving Through Five Years

I wrote the Five Year Plan in mid-2013. In it, I noted a prediction that IT departments wouldn’t exist in five years, meaning by mid-2018. That’s a year and a half away. Is that a possibility?

I don’t think so. The more I work in the technology world, the more I see a need for humans to help manage the systems and data. The systems are complex, the small details of getting a platform up and running are varied and not standardized across any two companies, and I can’t envision a complete self-service world. As easy as the Azure or AWS consoles can be, the mindset of those platforms still expects a technical person to choose options and provision systems. After all, how many of your non-technical friends understand what geo-redundancy is?

It doesn’t seem that IT departments are really shrinking. As I look through various surveys, employment statistics, and predictions, it seems that almost all positions in IT are still growing and hiring. The outlook for the next few years is still good, and pay is still rising overall. What does that mean for all the DevOps, self-service, and BYOD vendor support that hint at fewer jobs for many administrators?

I suspect there are trends at some companies where mundane, less-skilled, easy-to-automate jobs are being replaced by automation. Some companies may even eliminate certain jobs, like the Database Administrator, but they don’t really eliminate people. Those individuals who can learn to handle other work and become more efficient still keep their jobs, albeit with different titles. Some work may get handled by systems, but much of it just gets distributed to other staff as part of their jobs.

I’ve seen this in software development at companies that eliminated testers. Developers and operations staff become responsible for different aspects of testing. Each person spends a little time testing in addition to their other work. Everyone ends up doing a little less of what they used to do, and a little more of something new. This usually results in a larger development staff to cover the work the testers used to do. Often the department remains the same size, some testers become junior developers, and we’ve simply moved work around. The shared responsibility might actually improve overall quality, since the impact of poor code gets noticed by more people.

I think this is what will happen with many operational IT staffs. Perhaps some companies will try to eliminate the IT department, but they’ll really just move the staff to different departments, change the reporting structure, and perhaps expand some people’s responsibilities. They’ll likely still have the same number of “IT staff,” even if they don’t call it that.

This doesn’t mean that each of us should count on gainful employment at our organization until we retire. Most of us should constantly get better at our jobs, and learn more about technology. I would recommend you learn new skills, but constantly and regularly practice and polish your old ones. Become better at your craft, even as you might choose to grow your career in new ways.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio ( MB) podcast or subscribe to the feed at iTunes and Libsyn.


Opening the PowerShell ISE from Explorer

This is a cool productivity trick, and one that I ran into by accident. I had heard that I could launch a command prompt by typing cmd in the Explorer address bar. That works, and it’s cool. It even works with ConEmu, which is my default command window.

However, while I was looking for other hints, I found that powershell_ise also works. That will launch the PowerShell ISE editor with the current folder as the default one in the lower pane. The “Open” dialog is still at the previous location, though, and I’m not sure how easy that is to change.

In any case, here’s how this works. Browse to a folder in Explorer, such as a GitHub repo I have:

(screenshot: Explorer showing the Load-SQL-Saturday-Sessions repo folder)

Type “powershell_ise” in the address bar:

(screenshot: the Explorer address bar with powershell_ise typed in)

Hit Enter, and the ISE appears. Note the path in the lower pane.

(screenshot: Windows PowerShell ISE open, with the folder path in the lower pane)

That’s it.

Quick and easy. If you want to get to the ISE quickly to do something or write some code, this is one way to do it from a folder.


Why DevOps? For Better Security

DevOps is a mixture of principles, ideas, recommendations, tools, processes, attitudes, and more. There isn’t any one way to implement a DevOps process, and plenty of people have been working in what many would consider a DevOps environment without calling it that. I really like Donovan Brown’s definition: “DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”

That sums it up nicely, but what are some of the “value” items that we can deliver to our customers? Today I want to discuss one of these: security.

The historical view of a secure system is one that gets locked down, rarely changes, and where every change allowed is reviewed to ensure no mistakes are made. That view fits fine in a DevOps software pipeline, well, except for the rare part. Does that make an application built with DevOps less secure? Let’s turn that around: is a traditional (waterfall, agile, etc.) application more secure because of those limitations?

I’d argue it’s not. One of the challenges with security is that the threats, holes, and vulnerabilities constantly change. What was secure last week might not be secure this week. In traditional applications we tend to find one of two things: since deployments are relatively rare, security problems either remain unpatched for long periods of time, or they are patched quickly by changes to production systems that are not well tested or evaluated. There are countless tales of changes made to production applications that end up breaking the system and must be removed. The result is a less secure system. This can be especially problematic when dependent software, for example OpenSSL, is not patched because there are so many dependencies that no one is willing to change the system for fear of causing downtime.

In a mature DevOps environment, the system is better understood because the software is regularly built, testing is automated, and there are regular deployments to various downstream environments. Security patches can be incorporated and deployed quickly, letting our automated testing process and intermediate environments look for potential issues. With a regular branching strategy, we can even quickly suspend current development and focus on producing a patch or changing other code to ensure a successful deployment. Because we practice regular deployments, the need for untested, cowboy code changes in production is eliminated.

Certainly a DevOps process doesn’t prevent mistakes. It doesn’t ensure developers or administrators won’t create vulnerabilities, intentional or accidental. What DevOps does ask is that we continually learn, get feedback from our efforts, and incorporate that feedback into our process. If we find a problem in how we write code, a missed test, or a problem in deployment, we correct it in our automated process to prevent it happening again. And since every task, every build, every deployment is logged, we can audit everyone’s actions. DevOps encourages more security, though not perfect security. The goal is that a DevOps process gets us a little better security every time we learn something.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (5.3MB) podcast or subscribe to the feed at iTunes and Libsyn.
