Republish: Common Mistakes

Back across the pond, so a re-run of Common Mistakes.

Posted in Uncategorized | Leave a comment

Republish: Release Wednesday

I’m in the UK getting ready for SQL in the City, so Release Wednesday is being republished.

Posted in Editorial | Tagged | Leave a comment

Updating to SQL Change Automation

SQL Change Automation was released early this week. It replaces ReadyRoll and our DLM Automation products, rolling them into a single product, and is the first stage of unifying our state-based and migration-based solutions.

When I logged into Visual Studio and checked updates this week, I found a new item:

[Screenshot: Extensions and Updates]

I selected this for upgrade, which replaced my ReadyRoll installation. The extension downloaded, and then I had to exit Visual Studio, which triggered the install.

[Screenshot: VSIX Installer]

It’s a painless process, and when complete, I see the change in my projects.

[Screenshot: New Project]

This works the same as the old ReadyRoll system, though there have been lots of updates to the product this year. The setup is a little better, giving you some help getting started. We have the obligatory welcome screen:

[Screenshot]

But after this, the setup helps you understand a bit about what to configure. The next screen shows that both a development target and a deployment target are needed, which is meant to ensure you realize you’re building a deployment flow here.

[Screenshot]

There’s more work to be done, and certainly more information I can provide. Look for me to build some PoC-type projects here soon to help you get started.

Posted in Blog | Tagged , , , | Leave a comment

Republish: State v Migrations

Back in the UK, so a republish of State v Migrations.

Posted in Editorial | Tagged | Leave a comment

A Long Break

I’m in the UK today, having flown over on the weekend. This week is SQL in the City – June 2018 on Wednesday and I needed the two full days to prep, so I’m here a day early.

Not sure how much work I’ll get done this week, or if I have time to write, as I have Redgate work these two days, then I’m off Thursday to Cork, Ireland for the ISACA conference there Friday. My wife meets me and then I’ll be on holiday until next Wednesday.

Enjoy the week, tune into SQL in the City, and I’ll be back blogging next week.

Posted in Blog | Leave a comment

More Query Tuning?

This is probably a topic near and dear to the hearts of Grant, Brent, and Paul #2 (White), and plenty of other well-known speakers in the #sqlfamily community who often present on writing more efficient code. They do a fantastic job, and if you get the chance to see any of these three, take it.

Recently I saw someone on Twitter ask for more query tuning sessions at SQL Saturdays and larger conferences. These seem to be very popular sessions, usually very well attended. Despite this, I don’t see many of these sessions relative to their popularity. I sometimes wonder if this is because relatively few speakers want to tackle complex challenges. Or maybe many don’t feel confident portraying themselves as experts in this subject? Is query tuning 101, or even 201, boring and less interesting for speakers?

I don’t know, and I’ve avoided the topic myself. Part of this is to avoid conflicting with friends, but it’s also a complex topic to try to cover. Despite that, I keep thinking that some more basic concepts would be welcomed by many who attend SQL Saturday events. I expect there could be some sort of performance or tuning talk every hour at a conference and plenty of people would still attend. That makes me think I ought to do a tuning session of some sort, just to help ensure this topic is covered more often.

Picking a mix of topics and levels is often a difficult task for many organizers. As this year’s Summit sessions were released, I’ve seen a number of speakers bemoan that their favorite topic has few, often just one or two, talks scheduled. I think this is somewhat inevitable as new technologies get folded into the Microsoft Data Platform. I know that many of us are excited about one thing, the item that we use most often, or that we’d like to use more. I think the addition of Python is great, but it’s a small part of the platform, it’s new, and I don’t know how many other people want to use it. The same thing could be said for containers, for Query Store, and more.

Building a schedule for a conference is about making choices and decisions that give a variety of topics, but also include some depth and detail in those areas that are popular. If you’re attending (or attended) a conference this year, the Summit, SQL Bits, a SQL Saturday, etc., what do you want to see on the program? Should there be more performance tuning sessions or do you like a wide variety of topics that let you choose what might suit you?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.9MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

Building Test Data

One of the debates I’ve seen over the last few months is about test data in development environments. As I’ve been preparing for and learning more about the GDPR, it seems that many companies are concerned about holding sensitive data in their development systems. I think it’s a valid concern, and I’ve often had to deal with this issue in the past, before any regulation impacted my work.

In one of my early jobs, we stored emails from customers in a table. We also had an email feature in our application. Needless to say, we needed to test that, though we obviously didn’t want to send emails to real customers while testing a feature. I’ve done that, and it usually results in a complaint and some scolding of the development staff. As a result, I learned to ensure that any time we restored production to our QA system, we ran a script that would either change all emails to invalid values or reset them to something we could use in a test system. In some cases, we’d reset them all to a specific address that we could check to see if the emails actually were sent.
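That scrub step is simple to script. Here’s a minimal sketch in Python against an in-memory SQLite table; the table, column, and test address are all hypothetical, and in practice this would be T-SQL run against the restored QA database:

```python
import sqlite3

# Hypothetical customers table; a stand-in for the restored QA database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO customers (email) VALUES (?)",
                 [("alice@example.com",), ("bob@example.org",)])

def scrub_emails(conn, test_address="qa-inbox@example.com"):
    """Reset every customer email to a single test address we control,
    so any mail sent from QA lands in one inbox we can check."""
    conn.execute("UPDATE customers SET email = ?", (test_address,))
    conn.commit()

scrub_emails(conn)
rows = conn.execute("SELECT DISTINCT email FROM customers").fetchall()
print(rows)  # [('qa-inbox@example.com',)]
```

The key point is that the scrub runs automatically as part of every restore, not as a manual step someone might forget.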

In talking to many people, I find they often don’t build test data for development systems because the data isn’t valid. What a developer invents, or what might be randomly generated by some utility, often isn’t seen as valuable. Most developers want to see real data, perhaps because they can then better relate their work on a specific feature to the actual live system. Maybe it’s easier to discuss actual customers, products, accounts, etc. when working with clients or testers, but I do think, certainly in light of the GDPR and other regulation, that there are risks here.

While many people want to just restore production to refresh environments, I think it’s a poor idea to use actual sensitive data. Even if you trust your developers, there has been no shortage of attacks against development systems, and of losses of laptops or other files with production data that were intended for developers. We just don’t secure test and development environments like production, and that means we are making a fundamental error in our work habits.

I’ve gone through different views on this topic over the last couple of decades, and now I want a mix of real-looking and fabricated data. What I’d really like is a bunch of random data that is close to production, mimicking its shape and skew, but without any sensitive data. Then I’d like a set of known cases covering the types of data that we need to ensure work in our system: various transactions and values designed to cover the functional edges that we support.
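A sketch of what I mean, in Python; the fields, distributions, and edge cases here are all invented for illustration, not a real generator:

```python
import random
import string

random.seed(42)  # reproducible test sets

def random_customer():
    """One random row that mimics the shape of a production record."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    # Long-tailed order counts: most customers order rarely, a few order
    # a lot, roughly mimicking the skew of real production data.
    orders = int(random.paretovariate(2))
    return {"name": name, "email": f"{name}@test.invalid", "orders": orders}

# Bulk random data, close to production in shape but with nothing sensitive...
bulk = [random_customer() for _ in range(1000)]

# ...plus known cases that probe the functional edges we must support.
edge_cases = [
    {"name": "", "email": "empty-name@test.invalid", "orders": 0},
    {"name": "o'brien", "email": "quote@test.invalid", "orders": 1},  # embedded quote
    {"name": "x" * 100, "email": "max-length@test.invalid", "orders": 10},
]

test_set = bulk + edge_cases
print(len(test_set))  # 1003
```

The bulk set gives realistic volume and skew for performance work; the edge cases are the rows a tester actually reaches for.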

Of course, building these sets isn’t always easy, and the work is never truly finished. As long as we write software, we will need to maintain tests and data alongside the code. I do think this is a worthwhile investment in regulated industries, and likely worth doing in all industries. The thing is, it’s not interesting or fun, and likely never to be done in most organizations. I’d like to change that, and I hope you do as well.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.3MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

Docker Challenges

I’ve been trying to find time to play with Docker more. I had it set up on my old laptop and was experimenting with different ways of working with containers and databases before I switched machines. Since then I’ve been slightly hamstrung, as I’ve been a VMware person for some time and have demos and various machines set up, so I hadn’t gotten this working on my desktop.

On the new laptop, I decided to get this moving on my last trip. I installed the Docker msi, rebooted, and …

Nothing. I saw a service, but none of the “docker …” commands would work. I’d get errors, and when I tried to run Docker for Windows, nothing happened. No app, nothing in the task bar, just a big fat nada. I’d get an error about VirtualBox, and despite some bcdedit and other commands, I couldn’t get this to work.

This has been a low priority, despite the fact that I’d like to do a database container presentation and Redgate is doing some work here. On a recent trip I had some time to play, so I sat down and started debugging. I went back to the Windows 10 Quick Start on Microsoft’s site and went from there.

A few things on this. First, I uninstalled the old Docker for Windows and rebooted the machine. I made sure virtualization was enabled in the BIOS and then went through the Docker instructions. I downloaded the new Community Edition .exe for 64-bit Windows and installed it. Another restart, and I was amazed to see the Docker icon in the status bar. I’d made progress.

I tried a “docker run hello-world”, but got an error. Before I could start swearing, I got a Docker popup telling me I needed to log into the Docker Cloud. That necessitated creating a DockerID (way0utwest, of course), which was fun since I couldn’t minimize the Docker popup and I was afraid to close it. Once I registered and confirmed my email, I could run the Hello World container.

Docker is interesting and I’ll do some more work and writing about it, especially the basics. If you’ve never tried a container, it’s one of those technologies to put on your list.

Posted in Blog | Tagged , | 4 Comments

The Data Submarine

For some reason, this came to mind: under the sea, under the sea. That’s what I thought of as I was reading about Microsoft sinking a data center. There are probably jokes to be made about Microsoft and sinking, though they’re less humorous as the company’s stock price has risen quite a bit in the last few years.

This is a research project from Microsoft on future data center design: modular devices that can be submerged for years at a time, with only a connection for power and data back to the surface. These are designed as sealed environments, without the creature comforts needed in data centers for human technicians. The systems inside are built to live on their own, using the surrounding water to carry away the heat generated by computation.

There won’t be any repairs or replacements for failures. With the equivalent of twelve racks of servers in the system, I wonder how long they will last. I find it interesting that the experiment is designed for a year, though the device should have a lifetime of five years. Does that mean Microsoft expects current hardware to last for five years? Is that the new lifecycle of modern chips and storage? Or perhaps the lifecycle is shorter, and this is more a test of the extreme lifetime they expect, with component failures tracked and charted over time. I expect they already know some of the expected hardware lifetimes from their massive Azure data centers, which they can compare to this environment.

It’s an interesting idea, and one that might see smaller, modular data centers spread around bodies of water where there is enough movement to carry heat away. This should reduce power consumption, as less is needed for cooling. This can also reduce latency, with devices perhaps located closer to clients for heavy compute capabilities or even content delivery.

I’m not sure these will work at larger scales, as heat attracts life, with plants and animals potentially migrating to be near the submersibles. Who knows whether there would be interference with operations, but I wouldn’t be surprised to find that some level of maintenance is needed. We might see the rise of a new type of job, like undersea gardener or window washer, keeping the submarines clear of encroaching biology.

Of course, I wouldn’t be surprised to see Roomba-like automated devices that put many of these humans out of work.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.4MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | 2 Comments

DevOps Debugs the Demo

I had my 24 Hours of PASS session today, Database DevOps to Ensure Compliance, and my demo broke. At least, it broke for me. I kept going and talked over things, but I hate that. It would have been impressive if it had worked.

Debugging

In a live session, I might have debugged this a bit more. I’ve had things fail a few times over the years, but before I was doing the DevOps thing, I couldn’t usually figure things out until I got back to a desk. With DevOps, I’ve solved a few things with the instrumentation.

In this case, I was cognizant of time, as another session was starting after mine without the break that usually comes between them. Plus, since it’s online, I couldn’t tell if anyone really wanted to know what happened.

I do, and as soon as the webinar ended, I looked at the release and realized what happened.

The Error

I had this error.

[Screenshot: the error]

I’d made four changes in Development that were supposed to deploy through to Production. I knew the pipeline worked because I’d run it not 15 minutes before the webinar. There were no errors, so what was wrong?

One thing I’d done was to practice a change, which created a build. I didn’t deploy this forward, so it was essentially sitting in the VCS, but wasn’t in downstream environments. When I looked at my release, I saw this:

[Screenshot: the release]

Note the build number above, 912. This was my previous build. As soon as I clicked to the CI process, I saw that my last build was 914.

[Screenshot: the CI builds]

I’d been in such a hurry to kick off the release that I clicked too soon. The build had completed, but the release page hadn’t refreshed, so it picked up build 912. Since I hadn’t deployed that one forward, I didn’t get the warning that I was re-deploying a build that had already gone to the QA database.
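This kind of mistake is easy to guard against in a scripted release: select the newest successful build explicitly rather than trusting whatever the page last cached. A hypothetical sketch in Python, with the build records standing in for whatever a CI system’s API would return:

```python
def latest_completed_build(builds):
    """Pick the newest successful build explicitly, so a release can
    never silently reuse a stale build number like 912."""
    finished = [b for b in builds
                if b["status"] == "completed" and b["result"] == "succeeded"]
    if not finished:
        raise ValueError("no completed builds available to release")
    return max(finished, key=lambda b: b["id"])

# Stand-in build records; a real pipeline would fetch these from the CI API.
builds = [
    {"id": 912, "status": "completed", "result": "succeeded"},
    {"id": 913, "status": "completed", "result": "failed"},
    {"id": 914, "status": "completed", "result": "succeeded"},
]
print(latest_completed_build(builds)["id"])  # 914
```

A manual click in a web page can always grab a stale value; a scripted check like this fails loudly instead.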

I created a new release after the webinar, and it went through. With ultimate confidence in myself, I sent it straight to Production from QA.

[Screenshot: the deployment to Production]

And it worked:

[Screenshot: the successful release]

Apologies for the issues. I’m not sure why Visualstudio.com was so slow here. Apparently the audio worked, so I’m guessing it was their site.

In any case, I’ll be hosting a webinar doing DevOps in a slightly different way on June 28. Register and join us if you want to see something similar but different.

Posted in Blog | Tagged , | Leave a comment