A Quick Data Backup

The other day I was driving in the Tesla and clicked on the Spotify application and chose a playlist. In this case, it was the “This is Stevie Wonder” playlist, which my wife and I had listened to the night before. I got a “load error” on a song.

I tried another and got the same thing. I tried a third in this playlist, thinking this was some network error, but nothing worked. I switched to a different playlist and got a song to play, but when I went back to the Stevie Wonder one, it showed all the songs, but wouldn’t play.

I finally connected my phone and opened the playlist in Spotify, only to see it was empty. Somehow the Spotify-generated list had been deleted, and the app in the Tesla hadn’t updated to reflect the change.

That got me thinking. I don’t have a lot of playlists, but I do sometimes follow and set up system playlists that I like. In the modern world, where I’m sharing this data with others, I’m dependent on someone else not breaking things.

I looked around to see if I could export and import playlists, or copy them, but I didn’t see anything, so I decided to make my own backup. There are really only 5 or 6 of these I follow, so I created my own new playlist.


I renamed it to the band name, in this case, The Beatles. Next, I opened the Spotify list, selected the songs, and then dragged them over.


I did this for the 5 or 6 “This is xxx” lists and then removed those Spotify ones from my list. If those get updated, I won’t know, but that’s not critical here. If I find something missing, or want to add something, I can. In fact, my “John Mayer” list now has “83”, a song that was missing from the Spotify list.

This is another case where a quick manual task works well and is likely better than a lot of code work. Here, coding a solution would probably have meant a lot of research to find an API that might not even exist.

Posted in Blog | Tagged , | Comments Off on A Quick Data Backup

Knowing When to Respond

I ran into this quote on the Microsoft Learn site, which I thought was a great way to think about how to administer a system: “Without a baseline, every issue encountered could be considered normal and therefore not require any additional intervention.”

When I’ve had users file tickets or complain about things not working well, I’ve found that more often than not their perception has changed more than the actual performance. I’ve been called about “slow applications” only to find that “slow” was 30 seconds, and the complainer wasn’t sure how long it used to take, but with today being the end of the quarter, it felt slow. Digging into the monitoring history showed that the query had always taken at least 20 seconds and could take over 30. My main takeaway was that a little stress for users sometimes culminates in unnecessary work for operations staff.

There certainly are times when a database query takes longer than expected but is it because the system is overloaded or there’s a lot more data? When was the last time this ran and what changed? Are there more queries against the same objects than in the past? Even when there are real problems, without knowing how a system typically looks at this time, we may struggle to quickly determine where the problem lies. We may not even know how to craft a good solution without some baseline.

Maybe the best reason for me to know a baseline is for triaging and prioritizing issues. Seeing a server at 100% CPU is one thing, but if this is a daily occurrence, I might decide another issue is more important. Especially at 2 am.
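The triage idea above can be sketched with a trivial check: given historical durations for a query at a comparable time of day, only flag measurements that fall outside the usual range. This is a minimal sketch in Python; the sample durations and the two-sigma threshold are made up for illustration, not taken from any real monitoring system.

```python
import statistics

def is_anomalous(baseline_samples, current_value, threshold_sigmas=2.0):
    """Flag a measurement that falls outside the baseline's normal range."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    return current_value > mean + threshold_sigmas * stdev

# Historical durations (seconds) for the same query at this time of day.
baseline = [20, 22, 25, 28, 30, 24, 26]

print(is_anomalous(baseline, 30))  # False: 30s is within the usual range
print(is_anomalous(baseline, 60))  # True: well outside it, worth a look
```

A 30-second run that feels slow to a stressed user is, against this baseline, completely normal; only the 60-second run would page anyone at 2 am.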

Having a baseline for your systems is important. Build a system if you must, buy one if you can, but get monitoring set up for your systems. It will help you focus development efforts when changed code doesn’t work as expected. It also helps your operations staff respond more efficiently to future issues.

Steve Jones

Posted in Editorial | Tagged , | Comments Off on Knowing When to Respond

Daily Coping 24 Jan 2022

I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.

Today’s tip is to be gentle with yourself when you make mistakes.

Our gate has been broken for a little over a month. I went out to work on it. I pulled the old one off, and was working to install the new one. The conduit under the driveway was clogged and I couldn’t get things through it. No wire, no string, no rebar with a sledgehammer. I had others help me, but bad weather and travel have slowed me.

It feels like I made a mistake thinking this would be easy, that I could do it myself, and that my approach was a good one. And that I wouldn’t get hurt. I lost a few days with a hurt back and sore wrist.

I had a garage door guy out for a repair recently, and he said he could handle digging up the driveway with a trencher and replacing the conduit. I could do the rest, or just let him do it all, which is what my wife said I should do.

I felt like I made a few mistakes here and compounded them, but I have been telling myself I made a good effort and this is just a learning experience. I’m slowly learning to not beat myself up over mistakes.

Posted in Blog | Tagged , , | Comments Off on Daily Coping 24 Jan 2022

Reusing Tools for New Purposes

One of the things you learn early on in programming is that you ought to reuse code whenever possible. This often means refactoring code into functions or methods. This ensures that the code is more easily maintained and that the knowledge and work of solving the problem is reused in many places.

In database code we don’t do this too often. We certainly can reuse some code by encapsulating it in a view, but this often brings performance penalties. We often don’t want to reuse code in stored procedures and functions, as embedding this into queries can cause lots of issues. In fact, it seems that much of the way databases optimize query performance isn’t that amenable to reusing code.

That being said, I ran into an interesting case recently when thinking about maintenance in the cloud. Someone was asking about SQL Agent and the lack of support for it in many PaaS database systems. I understand that, and while there are some ways to do this, they feel complex compared to using SQL Agent on a local instance, which is usually easy to set up and readily available. One of the speakers taking questions mentioned that the user shouldn’t forget about Azure Data Factory as an automation agent.

That caught my attention, as I hadn’t thought about it before. This week, there was a blog on that very topic, and I read through it to see what I thought. While this isn’t as easy as SQL Agent, it does seem to be easier than Azure Automation, and likely more familiar than elastic jobs. That is, if you already have ADF pipelines running. In many ways, this feels like using maintenance plans in SQL Server, though limited to the “call a stored procedure” type of task. Since many people use a solution like Ola’s, this is very easy to implement.
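As a rough sketch of what that might look like, ADF has a stored procedure activity that can call a procedure on a schedule via a pipeline trigger. The linked service name below is an assumption, and I’m using Ola Hallengren’s IndexOptimize procedure as the example call since that’s the kind of maintenance solution many people already deploy:

```json
{
  "name": "NightlyIndexMaintenance",
  "type": "SqlServerStoredProcedure",
  "linkedServiceName": {
    "referenceName": "AzureSqlDatabase",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "storedProcedureName": "dbo.IndexOptimize",
    "storedProcedureParameters": {
      "Databases": { "value": "USER_DATABASES", "type": "String" }
    }
  }
}
```

Attach a schedule trigger to the pipeline and you have something that looks a lot like an Agent job, built from pieces an ADF shop already knows.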

I like that this pattern reuses skills and a system that you may already be using, transferring the skills from one area (ETL) to another (administration). This might not seem like much, but limiting the tools and technology reduces complexity and means that each person needs to know less to support your environment. I’m a fan of code reuse (outside of T-SQL), and I think reusing other technology systems, where appropriate, is a good idea.

Steve Jones

Posted in Editorial | Tagged , | Comments Off on Reusing Tools for New Purposes