I want a new gadget with fewer features

Years ago I bought a Pebble smartwatch. I didn’t back the Kickstarter, but a few friends did and I liked the product. I wore mine for almost two years before it started to die. I can be rough on gadgets, so it’s possible I banged it around and caused the damage myself. I tried an original Fitbit briefly, but I really liked having the time on my wrist.

I got a Pebble 2, which I really like, but now I’m losing it. I got my son a Pebble 2 as well, but he recently fell and cracked the screen on his. He’s in college, and since I’ve had the use of his motorcycle, I decided to give him mine.

That means I need a new watch. Since the Pebble watches aren’t being updated and maintained anymore (no thanks to Fitbit), I wasn’t sure I wanted to get another one. I could, as they are still for sale at various places. I decided to look at other devices, perhaps some that might have better data options.

I put a few queries out to friends, but I’m not quite sure that I have a better choice. As I think about it, there are only a few things that I really use.

  • Clock/time – Especially important as I end up speaking in different time zones
  • Vibrating alarm – This is my alarm clock and allows me to wake up easily, without disturbing my wife.
  • Waterproof – I do not want to take this off for a shower or when I swim. So, most Fitbits are out.
  • Steps – I like step tracking. It’s a nice metric that I like to watch periodically.
  • Heart rate – I don’t check this too often, but it’s a good metric that I do want to watch as I get older.

Some devices have these features (and many more), but many are also in the $200 range, which seems like a lot to me, especially in this age of powerful, commodity hardware. The Pebble 2 did these things well, in a simple way.

I’m looking at reviews and comparing items based on recommendations, but more and more, it seems like another Pebble 2 might be the best choice. Maybe I should just buy two, in anticipation of the future.

Posted in Blog | Tagged , | 3 Comments

Voting for Change

Microsoft does listen to us. They’ve made a number of changes in SQL Server 2016/2017 in response to community votes and requests. The main way to make these requests is through Connect, though keep in mind that lots of items get filed, and I don’t know how much consideration is given to suggestions for improvements and enhancements. I’m sure a significant amount of time is spent tracking down bugs and attempting to reproduce them.

The best way to work for change is to advocate for your request and get others to vote on items. I’m sure there are hundreds, maybe thousands, of good requests out there, and most of us don’t have time to review them. We’ll often vote only when we hear about an issue from friends or on Twitter. Even then, many of us place different priorities on issues and might not vote for suggestions we don’t find useful. However, attention does get votes, so let people know what you would like to see.

That being said, there are a few I ran across that are worth mentioning. One that I think is an easy choice, and should have been implemented long ago, is the ability to increase the number of SQL Agent log files. Just as we can increase the engine error log count, we ought to be able to do the same for the Agent logs; auditing and proper security argue for keeping plenty of log history on busy systems. Plus, this should be easy to change. In the same vein, why not more system health session files?
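
For contrast, the engine side is configurable today. Here’s a hedged sketch of raising the engine error log count with the documented registry knob (“MyServer” is a placeholder instance name, and Invoke-Sqlcmd comes from the SqlServer module); nothing comparable exists for the Agent log:

# Hedged sketch: keep 30 engine error log files instead of the default 6.
# "MyServer" is a placeholder instance name.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query @"
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 30;
"@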

I like this one for hidden columns, which might be really handy for legacy code. What I’d really like is for the “String or binary data would be truncated” error to be enhanced. I can’t, for the life of me, understand why this hasn’t gotten some sort of fix. It’s been lingering for far too long, wasting countless development hours. Maybe they could fix it with new virtual tables.
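
To make the complaint concrete, here’s a tiny, hedged repro of that error (“MyServer” is again a placeholder). Note that the message names neither the column nor the row at fault:

# Hedged repro: the failure reports no column and no row.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query @"
CREATE TABLE #Shorty (val varchar(5));
-- The next line fails with only: 'String or binary data would be truncated.'
INSERT #Shorty VALUES ('much too long for five characters');
"@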

There are plenty of other items in Connect, some of which might be very useful, and quite a few of which are silly. I still think shining a light on some of these and getting more votes might change the future of the product. At least, that’s what I’m optimistic about.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

The Code Coverage Report

Someone told me recently that they needed to find a way to get code coverage for T-SQL. Not for themselves, not to improve quality, but because their manager wanted a report.

OK, here it is.

First, build a nice SSRS report that displays your company logo, a header, the current date, and a pie chart. Yes, I know pie charts are bad, but they’re good for managers who want something like code coverage.

Now, for the data source of the report, here’s your query:

SELECT 90 + ((RAND() * 8) - 2) AS CodeCoveragePercent
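-- RAND() * 8 spans 0 to 8, minus 2, so the "coverage" lands between 88 and 96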

It won’t always return over 90%, which is what some people want to see, but run the report enough times and you’ll get there.

Posted in Blog | Tagged , , | 1 Comment

Multi Script Removes 25 Database Limit

I woke up to a Slack message alerting me to a post in one of our channels. An engineer noted that SQL Multi Script was being upgraded today, releasing to all channels as v1.3.3. What caught my eye was not the release itself, but the fact that the 25-database limit (for parallel execution) has been removed.

Woot!!!

That’s one I’ve been asking for over quite a few years. We don’t have a lot of Multi Script customers, but there are still quite a few, and over time, many of them have asked me to remove the limit.

Some have 30, 40, 50, or 100 separate servers on which they want to run a query. Some have hundreds of databases on which they need to deploy the same code. All of them have struggled with needing to run Multi Script repeatedly to reach all their databases, and there’s no good reason for that as hardware has improved; many of their machines have 16 GB of RAM, where back in 2007, when Multi Script was first released, most people likely had 2 or 4 GB and much slower CPUs.

With my advocacy, a few salespeople, and some internal reorganization, we were finally able to press the engineers to dig into this code and remove the limit. This is good news for customers. It’s also good to have proof that customer requests can drive changes in the product.

If you’ve never tried SQL Multi Script, give it a go. It’s a handy tool that I think is much better than a CMS (Central Management Server). If you own a SQL Toolbelt license, you already have Multi Script. If you don’t, grab a trial and see how it might help you.

Posted in Blog | Tagged , | 2 Comments

I Will Write Bad Code

I’ll write bad code. I know it will happen. I’ll produce a bad query, incorrect logic, or the wrong data transformation.

This won’t be a malicious act. It might be because of ignorance, or perhaps just a simple mistake. My code might be the result of short-sightedness or not accounting for a potential situation. I might even misinterpret poorly written specifications and place the blame on others. Maybe I’ll misread the code and won’t realize there’s a problem.

And, for sure, this code will get deployed to production.

At some point I know this will happen, to me or someone else, and whatever the reason, I need to account for that fact in my software development process. I can’t prevent mistakes, as decades of software development have proven. Despite my best efforts, code reviews, and the tests implemented in QA and elsewhere, bad code is going to get deployed.

The important thing in any software development project is how you handle the mistakes. How do you move forward and, just as importantly, how do you apply a patch? Can you do it quickly enough to minimize the impact to clients? Or must they live with the issue, working around the bug or missing functionality for a significant stretch of time? Or will they consider abandoning your software for some other vendor?

One of the reasons I continue to advocate for a DevOps approach is that a known process can enable you to fix your mistakes in a timely manner. With a consistent approach to writing, testing, and deploying software, you can apply a patch whenever one is needed. Certainly the code needs to be logically fixed, but a reliable process helps ease all the overhead of getting your code to a production system.

DevOps can be implemented many ways, and if applied in name only, things won’t improve. We can’t just say we’re doing DevOps, or that we’re coding faster, or that we release every week; we need to actually implement the three ways. However, if you approach your project and staff with the idea that although things are flawed, they can be improved and made better, you’ll find that you can deliver fixes for clients in a timely manner with a minimum of risk.

DevOps allows us to move faster, but that’s not the goal. The goal is that we improve things and have confidence that we can release when we want to, in a repeatable, reliable, less risky fashion.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

A Backup Change

Backups are a fundamental skill for most DBAs and, hopefully, for most technology professionals. For developers, I’d hope that most of you use some sort of version control, and that you back up your VCS database. I actually had someone ask why we needed to back up the VCS if we had the code on our machines. Certainly the local code on your system, or in your database, provides some level of redundancy, but for all the branches and all the code from other developers, you really want a real backup. For git, this can be as simple as a file-level backup.
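
As a sketch of what I mean (the paths are placeholders), git can capture every branch and tag in a single file that any file-level backup can then sweep up:

# Minimal sketch: bundle all refs of a repo into one file for a file-level backup.
# The repo and target paths are placeholders.
Set-Location 'C:\src\myrepo'
git bundle create '\\nas\backups\myrepo.bundle' --all

A bundle can also be cloned from directly (git clone myrepo.bundle myrepo), which makes restore tests easy.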

I recently got a letter from Crashplan, which I’ve been using as a backup provider for a few years. Apparently they are exiting the home backup market, choosing to focus on businesses. I chose them because they were an economical provider, with good ratings, that let me back up multiple machines. I’ve been happy with them, tested restores of a few files, but never truly needed the service. Now I need a new solution. I keep two copies at home, but what about a fire or other disaster? I want an offsite backup.

One of the things I’ve wanted in a backup solution is a hands-off process. While I’ve managed to use cloud sync software and VCS repos to move most work items from one machine to others, there are pictures and other data that I don’t want to lose. I’ve also got computers for my wife and kids that I’d like to have backed up. The Crashplan subscription for five computers worked great for me.

It doesn’t seem there are a lot of providers out there for families. Most focus on businesses or the individual, which is fine. Backblaze seems like the next best choice, and at $50/yr/computer, perhaps that’s a fair price. I’ve considered using Amazon Glacier and CloudBerry software, but that feels like I’m giving myself another management job to track. Though, maybe with PoSh available cross-platform, I could just build a set of scripts to let each computer notify me if there are issues. I’m still trying to decide what makes sense.
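
Here’s the kind of check I have in mind (a rough sketch; the backup path and webhook URL are placeholders):

# Rough sketch: warn me if the newest file in a backup target is over 3 days old.
# The path and the webhook URL are placeholders.
$latest = Get-ChildItem -Path 'D:\Backups' -Recurse -File |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
if (-not $latest -or $latest.LastWriteTime -lt (Get-Date).AddDays(-3)) {
    $body = @{ text = "$env:COMPUTERNAME backup looks stale" } | ConvertTo-Json
    Invoke-RestMethod -Uri 'https://hooks.example.com/notify' -Method Post -Body $body -ContentType 'application/json'
}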

Backup is important, and it’s becoming a more cumbersome job over time. As my family generates more pictures and video, I get more concerned about backup, especially the cost. These are the same problems and challenges I face as a DBA, though at work I often have a slightly bigger budget. The challenge of balancing a budget against the requirement to meet some RPO is the same.

Steve Jones


Posted in Editorial | Tagged | Leave a comment

Data Governance Survey–Still Time

Just a reminder: Redgate is doing research into data governance, and we have a small survey we’d love you to answer about how you view the process inside your company.

Take the Survey and be entered to win a $100 gift card.

If you want to read some thoughts on why we’re doing this, check out this post from the Foundry at Redgate.

I think this will become more important over time. It’s not just GDPR in Europe; more and more companies are getting increasingly negative press, and potential fines, for losing data. As much as the Equifax event angers me, I’m glad there are calls for investigations, as there should be.

There’s no good excuse for any vendor not to have strong security, even if it slows deployment and enhancement. We can’t have data being lost to criminals this often because of misconfigurations or silly issues.

Posted in Blog | Tagged , | Leave a comment

Remove an Active Lease on a Blob in Azure

I was creating and dropping VMs in Azure and found myself charged money for disks that I thought I’d deleted. I learned later that deleting a VM doesn’t necessarily remove the disk blob. Here’s one way I removed some of those blobs.

First, go to the classic portal. For some reason, the new portal doesn’t have this capability.

[Screenshot: Virtual machines – Microsoft Azure classic portal]

Click Disks

[Screenshot: the Disks list – Microsoft Azure classic portal]

At the bottom, click Delete


That removed one lease, but not the others. To get rid of those, I had to do more research and searching; that process is for another post.
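
In the meantime, if you’d rather script it, here’s my rough, hedged sketch of breaking a lease with the Az.Storage PowerShell module (the account name, key, container, and blob name are placeholders, and this isn’t necessarily the process from that future post):

# Hedged sketch: break an active lease on a blob with the Az.Storage module.
# The account name, key, container, and blob name are placeholders.
Import-Module Az.Storage
$ctx  = New-AzStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey '<storage-key>'
$blob = Get-AzStorageBlob -Container 'vhds' -Blob 'old-vm-disk.vhd' -Context $ctx
# A zero break period ends the lease immediately; the $nulls satisfy the .NET overload.
$blob.ICloudBlob.BreakLease([TimeSpan]::Zero, $null, $null, $null)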

Posted in Blog | Tagged , , | Leave a comment

The Randomness of Analog

One of the joys for much of my early life was walking through a library, looking for a book to curl up in the corner with and read. As a young boy, I would walk to a local library and read at the wooden tables before I had my own library card to check out books. Later, I moved and had a newer library with large, comfortable chairs in which to sit and read a few pages. I’ve enjoyed the same thing as an adult in various bookstores.

I didn’t often have a specific book I wanted to read, so I’d wander around, looking at spines and covers, and choose a book in a somewhat random fashion. My fellow founder at SQLServerCentral, Andy Warren, also appreciated the randomness of browsing in a library or bookstore, discovering some new author or story to enjoy. Across the years, we’ve discussed and debated whether or not there is a way to duplicate this experience with technology.

These days I tend to buy or borrow all my books electronically. The convenience is unparalleled, time is valuable, and I certainly don’t miss the days of packing 4-5 large books for a week-long conference trip. However, Amazon and my local library tend to use recommendation algorithms or popular titles as the presentation method for their sites. I’ve lost much of the ability to enjoy the randomness that comes from wandering and happening upon new titles. Andy feels the same way, though none of our brainstorming has produced a way to duplicate the feeling of wandering through bookshelves in an electronic fashion.

I’m not sure whether there is a way to duplicate this electronically, but certainly the feel isn’t the same on a screen. All too often our focus when working with data is narrowed to a limited set of choices. And often when we build applications and provide data to users, we are trying to be exacting and relevant, not random. So much of what we choose to do in software removes the randomness from our systems. Even the “browse” features are often scoped to a particular topic, subject, or area.

This filtering into a particular bubble of data is one of the areas where we have tremendous power in shaping the world. The code and queries we write, and the organization of our data, have an impact on our users, and I’m not always sure this is for the best. Perhaps overall it is more helpful, but it also prevents us from viewing the forest, letting us see only the trees. If you doubt this, try browsing the internet in private mode sometime and run some searches. You might be amazed how different the Internet can look.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.2MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

Automate Migrations–T-SQL Tuesday #94

It’s T-SQL Tuesday time again, and I’m slightly pressed for time, which is sad, because this is a great topic.

Rob Sewell hosts this month, asking what we are going to automate. He’s a PowerShell advocate, so I’m not surprised by the question. As much as I enjoy working in various languages, PoSh becomes more and more handy to me when I need to work outside of the SQL Server platform. I’ve been trying to play with it, and I enjoy it more and more.

If you want to participate, check out the rules:

  1. Write a post on the topic below
  2. Schedule the post to go live on Tuesday, September 12th (between 00:00 and 23:59 UTC)
  3. Include the T-SQL Tuesday logo at the top of your post
  4. Link the post back to this one (it’s easier if you comment on this post and link it)
  5. Optional: Tweet a link to your post using the #tsql2sday hash tag on Twitter

Automate Things Between Instances

The first time I saw the dbatools project in action was at SQL Saturday Cambridge, where Chrissy LeMaire gave a session with Rob. I was surprised by the power and ease of the project, and decided to learn more as well as promote it. I’ve tried to blog regularly about their cmdlets as I get the chance to play with them, and I’m pretty much always impressed.

The next time you need to move some object, setting, job, etc. from one instance to another, you should try this:

  1. Install the dbatools module
  2. Look through the command index
  3. Try migrating your object(s) with PoSh.

That’s it.

Maybe you need to copy a database or login. Maybe you want to copy jobs to a new server. The dbatools module makes all of these things easy.
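
Here’s a hedged sketch of those three tasks with dbatools (the instance, database, share, and login names are placeholders; check the command index for the exact parameters in your version):

# Hedged sketch; instance, database, share, and login names are placeholders.
Install-Module dbatools -Scope CurrentUser
# Copy a database via backup/restore over a share both instances can reach
Copy-DbaDatabase -Source sql01 -Destination sql02 -Database Sales -BackupRestore -SharedPath '\\nas\sqlbackups'
# Bring the matching login along
Copy-DbaLogin -Source sql01 -Destination sql02 -Login webapp
# Copy the Agent jobs to the new server
Copy-DbaAgentJob -Source sql01 -Destination sql02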

So give it a try. There are some great tools for migrations as well as wonderful items for common DBA tasks.

Posted in Blog | Tagged , , , | Leave a comment