Somehow I stumbled across this picture of @brento on my drive.
This might be the best one:
And this was a good time:
However, I have a collection of silly moments. Enjoy.
Today is my last day at work for a couple of weeks. Actually, I’m not quite done yet. I travel to Louisville today for the SQL Saturday tomorrow, and then I return home Sunday. I’ll be traveling again on Monday, so this is really my last day in the office. I hope. It’s possible I’ll realize over the weekend that I’ve forgotten something and need to tackle it remotely, but I’m hoping that won’t be the case.
While I won’t be completely unable to do anything, I am hoping that I’ll remain unwired for the two weeks. We are traveling to remote locations for some of the time, and there might not be cell service, and certainly not wi-fi. There will be some access on some days, as my son is taking an online class this summer and he has work to do, but I’m going to try and stay off email and Twitter, using my phone to just read and take/post pictures.
Getting away from work is a challenge for many of us in technology. Earlier in my career, and for many years running this site, I struggled to take time off. I’d regularly check on the site, even writing editorials at times while on holiday. I can still remember using a public terminal at a hotel years ago to connect with work. In the last few years, I’ve gotten better, and taking time off for a 6 week sabbatical really helped. I even take all my vacation most years.
That’s something that plenty of us still need to learn. I saw a post from someone recently that was on vacation and checking email. There were issues, though a backup person was supposed to handle them. I get the compulsion to check on anything for which you have responsibility. I get the habit of looking at email. I get the desire to just fix things. Those are feelings that I think many of us experience.
What I’ve learned over time is that most of the work we do isn’t that critical. Certainly our employer’s systems can affect operations, profitability, and more, but they’re not our systems. They’re the responsibility of the organization to run, which includes ensuring there are backup resources, human and otherwise, that can manage systems. For those of us that are on top of our jobs, that are the go-to person, that don’t just maintain, but actively improve systems, we need to let others make decisions, even poor ones, and bear that same burden. Ultimately that’s how others learn and get better.
I experience this as a parent, and I’m sure many of you do as well. We want our children to do better, to avoid the mistakes we’ve made in the past. We often try to actively manage how our kids live, or prevent them from making a poor decision. Across three kids and many years, I’ve learned to step back more and allow my kids to fail. It’s hard, one of the harder things in my life, to not act or say something, and instead allow them to make a mistake. Even at work I’ve started to step back at times and hold my tongue if I’m not positive I need to say something. Instead, I try to listen more and accept that things will be less than perfect.
That’s OK. We strive to become better, to improve our systems, to increase efficiency, but it’s possible to do too much. To overspend resources on tuning, or indeed, not make things better but worse. We can overcomplicate things to the point where others can’t understand them, or perhaps worse, we encourage mistakes from the complexity we’ve introduced.
Life requires some balance, as do our careers. Work hard, improve things where you can, meet your obligations and take responsibility for your mistakes. Accept that others will make mistakes, and that you are not irreplaceable while you enjoy your time away. I’m hoping I will during the next two weeks.
I’ve asked for #SQLCareer posts, and I haven’t done any myself. My job isn’t that of a data platform DBA or developer, but it is SQL related, so here’s a glimpse of the nonsense I perform. I’m asking for more posts, so there’s a note at the bottom.
This should have been a cool day, after all, I’m scheduled for my first (partial) presentation at Microsoft’s Inspire conference. It turned out to be not so great.
The day started with my waking up in the Tropicana in Las Vegas. I was a last minute addition to the Inspire staff, so I grabbed the nearest Hilton. I travel enough that it’s worth my while to stick with one airline and one hotel company. As I write this, I’ve just gotten two free nights for my wife and daughter in South Dakota this week for a volleyball camp.
Anyway, I get up around 7 and start packing. As I’m doing so, my daughter calls. I took her car key while she was gone to get copies made. A minor crisis distracted me on Tuesday afternoon, and her key is still in my car. At the Denver airport.
I apologize for the hectic, chaotic nature of my life and for not putting it back. I make mistakes, and know my kids do as well. I expect them to apologize, and I should do the same.
I check emails and answer a few, then shower and pack up.
7:45am – Walk across the bridge to catch the tram from the Excalibur to the Mandalay Bay resort. It’s got to be 90F this morning already, and I find the first station is blocked off for construction. Great.
Walk to street level and start heading to the next station. I get there and don’t see an easy way up. It’s a coin flip between walking inside and looking for a way up or crossing the street to the Luxor. Everything is far in Vegas, and I elect to walk. I get into the Luxor, and it seems as easy to use the internal walkway to the Mandalay as it is to wait for the tram.
7:52 – Starbucks. Priorities, yo.
8:05 – Inspire registration. I had problems on Wed, so I needed to complete this today. I start, click to pick my MS account login, and when I click the “Next” button, the internet disappears. Fortunately the event staff was helpful and brought me a laptop to complete registration.
8:30 – The Inspire schedule is a bit of a mess. I’m not sure where to go, but pick my co-speaker and he texts me directions to the room. I head over and sit down, watching a few other Microsoft partners talk about their companies and strategies.
I have 3000 steps today, but I see a couple interesting things that partners are doing. One of which might be something we try at Redgate. In any case, it’s an interesting half hour. Then I head over to an empty room to practice my talk with co-speakers, go over timing, and review my notes.
9:45 – We’re on. There are three of us talking about SQL Operations Studio, which seems to be Ops Studio to MS. I have 15 minutes at the end, so I stand around for a bit. Me on stage, just waiting around. The professional look today.
While waiting on stage, I check Slack and respond to a few messages. I do my talk, get one question, and then walk out. That’s after I’ve finished my coffee.
11:00 – The schedule seems to be just some general partner stuff to close the day. I ping a friend for lunch, but he’s down the strip at another hotel. With a 1:30 flight, I decide to give up and get a cab to the airport.
Not wanting to waste time, I forgo an Uber and get a cab. The guy takes a slightly long way around the West side of the airport, which is annoying. It’s not my money, but at Redgate, we try to spend wisely. This makes me feel like I should have stood around waiting for an Uber. I could have answered a few emails while waiting. I did that in the cab, but I likely didn’t save anything other than a few minutes of standing, and I wasted $5.
In any case, I get to the airport, walk to security, get through and make my way to the terminal. I do this enough that if I don’t try to be efficient, I waste lots of time. Fortunately from taxi drop off to sitting down for lunch is about 15 minutes for me. I sit there and go through a few product items on Slack, and respond to a question on the Redgate Hub. Nothing that important, but it’s the type of busy work that will catch up to me if I don’t get things done.
Post lunch I still have about 45 minutes before boarding, so I pull out the laptop, charge my phone, and start this post. I also answer a few emails and load some Database Weekly items for the newsletter. It’s my week to get links and write an editorial, so I try to do a little of this every day. I find it keeps me in touch with what’s happening in the database world.
One article is on Managed Instances in Azure and how they are different. It’s interesting and I learn about the Job Object, which I knew nothing about. I always seem to learn a few things every week that I didn’t know from someone. I also caught up on a couple SQL Career posts from Jen Stirrup and Brent Ozar.
The flight home is fine, but it’s cramped. I don’t do any work and just read a bit and watch part of a movie. I seem to see parts of movies here and there, but never bother to finish if they’re not that interesting.
6:30pm – I hit the gym on the way home for 30 minutes and then get my daughter’s keys made and then some chores at home.
7:45pm – One last check of things and finish this post. I realize I’ve scheduled tweets for the next three newsletters all tomorrow. Grrr, I delete them all and need to rebuild the schedules. Takes about 30 minutes.
Not a productive workday, but since I travel again Friday to SQL Saturday Louisville, I’ll let it go.
Security issues seem to be appearing more frequently, not less. I’d expect that we would be getting better at writing software, and I think many of us are. The problem is that more and more people are writing software, and we still haven’t found a way to better train developers early in their careers. Perhaps the one good thing is that more and more developers are using frameworks, which create more consistent software. If an issue is discovered, a single fix to the framework can patch a large swath of systems at once.
The bad news is that far too many development groups build systems quickly but don’t patch them in a timely manner. They may be afraid to, or they just don’t bother.
The specifics here aren’t important, but it is a concern that more and more often we are assembling applications from pieces of software. We often use plugins on websites and other building blocks when we put together a system. In more and more cases, we will be connecting this software to our data stores. That wasn’t the case here, but often there is some data access, and since we may keep both secure and non-secure data in the same database, any vulnerability in one building block can cause security issues in others. The “weakest link in the chain” saying applies here.
I wonder how many of you worry about issues with assembling applications from pre-built pieces of software. The pieces should be more secure, or at least more easily patched. There should be more incentive and resources to patch software used by many people, though vendors often become hesitant to do any more than absolutely necessary.
I’m not sure if it’s better to build out of pre-written pieces of code, but I do know that security is a shared responsibility, and I wish it were more of a priority for all developers. The security of our application can depend on that weakest link.
A customer had a question recently about using Data Masker for SQL Server and temporal (or system versioned) tables. I decided to make a quick demo that will show how this works.
This is a simple demo, using just a few changes, but the concepts are useful for larger systems.
In the demo database for Data Masker, there is a dbo.dm_customer table. I’m going to make this a temporal table with a few schema changes. First, I’ll add two columns that will be used to store the valid datetime ranges.
ALTER TABLE dbo.DM_CUSTOMER
ADD StartTime DATETIME2(3) GENERATED ALWAYS AS ROW START
        DEFAULT SYSUTCDATETIME(),
    EndTime DATETIME2(3) GENERATED ALWAYS AS ROW END
        DEFAULT CONVERT(DATETIME2(3), '9999-12-31 23:59:59.999'),
    PERIOD FOR SYSTEM_TIME(StartTime, EndTime);
Next, I’ll enable system versioning, and specify the history table.
ALTER TABLE dbo.DM_CUSTOMER SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE=dbo.DM_Customer_History));
At this point, let’s test. I’ll query the history table and show it’s empty.
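A quick way to do that check, assuming the demo schema above, is to query both tables side by side:

```sql
-- Current rows in the main table, with the new period columns
SELECT customer_id, customer_firstname, customer_lastname, StartTime, EndTime
  FROM dbo.DM_CUSTOMER;

-- The history table should return no rows yet, since nothing has changed
SELECT customer_id, customer_firstname, customer_lastname, StartTime, EndTime
  FROM dbo.DM_Customer_History;
```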
Now, let’s make a couple changes.
UPDATE dbo.DM_CUSTOMER SET customer_firstname = 'Jillian' WHERE customer_id = 1000002;
UPDATE dbo.DM_CUSTOMER SET customer_firstname = 'Sian' WHERE customer_id = 1000004;
INSERT dbo.DM_CUSTOMER ( customer_id, customer_firstname, customer_lastname, customer_gender, customer_company_name,
    customer_street_address, customer_region, customer_country, customer_email, customer_telephone,
    customer_zipcode, credit_card_type_id, customer_credit_card_number)
VALUES ( '1000', 'Steve', 'Jhones', 'M', 'Redgate', '123 My St', 'CO', 'US',
    'firstname.lastname@example.org', '30333333333', '80014', '1', '12345566');
UPDATE dbo.DM_CUSTOMER SET customer_lastname = 'Jones' WHERE customer_id = 1000;
UPDATE dbo.DM_CUSTOMER SET customer_lastname = 'Jilly' WHERE customer_id = 1000002;
UPDATE dbo.DM_CUSTOMER SET customer_lastname = 'Jill' WHERE customer_id = 1000002;
We requery, and you can see there is data in the history table (at the bottom). All of my changes are captured in a sequence of copies of the row.
There are a few more rows in the history table, but they got cut off in the image. That’s not important; just note that the names mostly match up between the tables.
Now the interesting part. I have a masking set that’s similar to ones I’ve used in other articles. In this case, I’m changing names in two substitution rules. However, these rules are set for the dbo.dm_customer table. What about the dbo.dm_customer_history table?
To change this, I need to do a few things. First, I need a Command rule to disable system versioning. This is the way I disconnect the tables so that I can update data.
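The command rule just runs a plain T-SQL statement. A sketch of the text I’d use, based on the table name in this demo:

```sql
-- Break the temporal link so the history table can be updated directly
ALTER TABLE dbo.DM_CUSTOMER SET (SYSTEM_VERSIONING = OFF);
```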
This has to take place before I change dbo.dm_customer, otherwise I’ll insert a bunch of new data in the history table. My ordering looks like this:
Once this is complete, I now need a Table-Table sync rule to ensure data in one table matches the other. When I create the rule, I need to have a way to join the two tables together. In this case, I use the customer_id column.
Note that I’m ensuring the names and credit card numbers match here. This will wreck my history table slightly, in that all of the rows for a customer_id will have the same value, but this does ensure I don’t have sensitive information leaking.
Once I’m done, I make this a child of the other rules. This ensures that the substitutions run on the main table, then are synchronized to the history table.
You’ll notice I also have another command rule at the bottom. The text for this rule is below, and this rule re-enables the temporal link. I’m assuming the application somehow depends on it. If not, you can skip this rule and leave the tables disconnected, if that’s OK for your application.
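For reference, the re-enable statement mirrors the one used to set up versioning earlier. I’m including the optional DATA_CONSISTENCY_CHECK here as a sanity check on the period columns:

```sql
-- Reconnect the history table after masking is complete
ALTER TABLE dbo.DM_CUSTOMER
  SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DM_Customer_History,
                               DATA_CONSISTENCY_CHECK = ON));
```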
All I need to do now is run the rules. Let’s do that.
Once the rules are complete, let’s query the tables again. I’m only looking for those rows that were changed previously.
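The verification query looks something like this, filtering to the customer_ids changed above:

```sql
-- Compare masked values between the main table and the history table
SELECT customer_id, customer_firstname, customer_lastname
  FROM dbo.DM_CUSTOMER
 WHERE customer_id IN (1000, 1000002, 1000004);

SELECT customer_id, customer_firstname, customer_lastname
  FROM dbo.DM_Customer_History
 WHERE customer_id IN (1000, 1000002, 1000004);
```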
As you can see, the dbo.dm_customer (top) table and the dbo.dm_customer_history table (bottom) are synchronized. The multiple values I had for different versions of customer 1000002 are now all the same. I’ve lost history, but if this were sensitive information, that works out. I could have used random values instead for the history table, but here I can see linkages if I’m checking for similar data.
Trying to determine how to rewrite history for audit tables is hard. Data Masker can do pretty much anything you would like, but here we need to be sure that we have an idea of what we want to do.
I’ve presented a simple version of setting all history to the current value, which will work in some cases. If not, let me know what you need to see, and I’ll build you a scenario.
Data Masker for SQL Server is an amazing tool in the SQL Provision bundle. Combined with SQL Clone, you can deploy databases with masked data for dev and test environments in seconds. Give it a try today and see what you think.
As I work with SQL Provision, I keep finding new questions and concerns from clients and customers. Recently I had someone wonder if we could determine whether or not a database on which they were working was a SQL Clone cloned copy.
You can, and it’s easy to check. When SQL Clone creates a database, it will use the base image, and ensure there is an extended property set on the database itself.
The function sys.fn_listextendedproperty() is used to return the database extended properties. We can use the DEFAULT keyword for the various parameters, like this:
SELECT objtype, objname, name, value
FROM sys.fn_listextendedproperty(DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
This gives me an empty result set on a non-SQL Clone database if I have no extended properties set. If I had others set, I might get some results. For a database I’ve set up with SQL Provision, I’ll get this:
For the most part, I don’t care that I’m using a clone rather than a native SQL Server database, but there could be places I do care, and certainly I want to filter out this extended property from my version control system.
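If you want to automate the check, you can filter on the property name. I’m hedging on the exact name here; run the earlier query first and substitute whatever name SQL Clone actually sets on your system:

```sql
-- Hypothetical check: the LIKE pattern stands in for the real property name SQL Clone uses
IF EXISTS (SELECT 1
             FROM sys.fn_listextendedproperty(DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
            WHERE name LIKE '%Clone%')
    PRINT 'This database appears to be a SQL Clone copy.';
ELSE
    PRINT 'No clone-related extended property found.';
```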
SQL Provision is a great tool for rapidly giving new environments to developers without the hassles of restoring copies and using lots of storage space. If you want to give it a try, download an evaluation today.
I’m heading back to Baton Rouge for SQL Saturday #749 this August. This is one of the largest SQL Saturdays in the US, and one I really enjoy. I’ve been there 5 or 6 times, and I was honored to be selected again.
But they’re working me. The schedule is out. I’m doing three sessions, the first three of the day. You can see me talk about:
If you’re in the area, this is a great way to get some training on SQL Server and .NET. There are lots of really good sessions from some experts, including the Power BI experts, Adam Saxton and Patrick LeBlanc.
Hope to see you near the Big Easy this August.
Whether or not you like the idea of the GDPR (and the new California law), I’m sure you’ll agree that these laws will likely change how businesses handle data. Even if they are weakened through legal challenges, many companies have already started to comply and change some of their practices.
I’ve written about the GDPR plenty of times this year, and I like the law. I hope the law stands strong and resists most challenges. While I’m sure there will be plenty of spurious or silly requests and complaints, I do think these laws are asking for the good data handling practices that most data professionals have advocated for years. These include not only security but also integrity. How often have many of us advocated for corrections to problematic data and been told no? How many times have we complained about security practices?
One area that I think has been neglected too long in most industries is retention. Most companies I’ve worked for have retained data indefinitely, without any thought or policy. In my mind, we ought to explicitly think about how long we hold data, and remove older data that isn’t needed for our organization’s operation. I feel more strongly about this over time, as data breaches become more and more prevalent.
Azure has started a preview of immutable storage, essentially WORM (Write Once, Read Many) drives as an Azure container. I’ve used WORM storage, but it’s often been viewed as a way of keeping information forever. That can change with this new Azure storage, as you can set a lifecycle management period. The blobs will be removed after this time, which removes one management headache from administrators.
I could see quite a few uses for this type of storage. If it’s inexpensive enough, what about storing backups here? We could have policies set to remove files after some limited period. I’m sure there are plenty of other uses for storage that is immutable but also includes lifecycle management options. What creative use would you have for this type of expiring WORM storage?
With the accelerating pace of development and more frequent releases, we are seeing a bunching up of SQL Server support milestones. From 2008 through 2016, we had a new version every two years. Last year we saw another version, which means 5 versions in a ten year span. With 5 years of mainstream support and 5 more of security patches and extended support, Microsoft has quite a few active branches of code.
We now see SQL Server 2008 and 2008 R2 coming to the end of support on July 9, 2019. There are plenty of options for customers, including doing nothing. After all, if you’ve been running SQL Server 2005 or even 2000, you’ve been without support for some time. That might not be a policy of your organization, particularly if you adhere to any standards or are bound by regulation. In those cases, you may need to have active support under contract.
There is a good argument to be made that you should have active support for your core systems, such as the OS or your database platform. If a bug or security issue is discovered, would you want to upgrade on a very short timeline, or apply a patch to your existing version? Imagine running SQL Server 2008 today when a critical security problem is discovered in October. Do you want to upgrade to SQL Server 201x this year? Or would you rather have planned and executed an upgrade in June, leaving you to simply apply a GDR patch in October?
I tend to view patches as much lower risk than an upgrade, mostly because the number of changes is smaller, and an upgrade reduces my options, not to mention often increases my costs. That might change over time, as I upgrade other instances and gain experience with a new version. SQL Server 2017 still feels new to me, but SQL Server 2014 and SQL Server 2016 feel older. If I had to perform an emergency upgrade to an instance, I’d be more comfortable moving from 2008 to 2016 than 2017 right now.
I wonder how many of you view the upgrades as more risky? These days the migration tools analyze for keyword issues, and many of those can be alleviated with compatibility levels. Perhaps some of you aren’t worried and will run your SQL Server 2008 instances until they die, only upgrading if forced. If so, let us know.
If you’ve got R2 instances, however, please get rid of them. The sooner we get rid of this horribly confusing version name, the better.
One late note: after this was written, Microsoft offered to add additional extended support if you migrate to Azure.
I’ve been using Redgate’s Data Masker for SQL Server in a number of situations to help customers. This is a tool we purchased in 2017, and it is part of the SQL Provision bundle for building safe, secure dev and test databases.
Project Figleaf is the next version of our data masking product and you can help us make it better. There’s a beta out that you can download and play with. Our development team is conducting research to better understand how we can meet your data masking needs.
As much as I like Data Masker, it’s a clunky, cumbersome tool that takes some getting used to. The UI is old and, at times, a bit of an impediment. It works well, and I’ve grown used to it, but we can do better, and we’re trying.
If you work with sensitive information in your production systems, don’t bring that back to dev and test. Instead, give our new Figleaf beta a try and see if this is something that might be helpful to you. Or even download Data Masker and try that.