Looking Back

Someone sent me this post on 40 years of programming. It’s a read that laments a few things and complains about many others. Most of the thoughts are opinions, and as such, you may or may not see validity in them. I suspect most of you, like me, see some things as mostly true and some as just false. Since I’ve been in this business for nearly 30 years, many of the comments bring back memories and thoughts about my career as well.

One of the things I do lament is the declining quality of documentation. I’ve seen the volume and detail decline over the years. I wouldn’t want to go back to paper, but I would like to see better examples and a more complete examination of the ins and outs of the various functions and features. Far too often I find that examples, explanations, or descriptions of behavior are missing. I see the same thing in blogs and articles, which often skip steps in their race to publish.

Focus and detail on a specific topic have been forgotten by too many. Even understanding the way your code works, or the way its dependencies work, has been lost. As more and more people move to using NuGet and pulling down packages, I find that many developers don’t quite understand how their systems work. The divide between those who practice DevOps well and those who just release code faster continues to grow. Few learn from their efforts and produce smoother software development pipelines. Most just release code faster, losing track of which versions and dependencies are required.

I still see projects from GitHub and other sources that lack explanations of how to actually compile and set up the project. It seems far too many developers half-release software, expecting others to spend time learning what works, what doesn’t, and which versions were used and might be needed again. Maybe in some places the software evolves so quickly, adopting new methodologies and technologies constantly, that developers never need to truly understand the system. They just get things working and then change them.

Maybe that’s why so many people want to rewrite and reproduce software rather than using some existing project and improving on it.

The one thing I do wish was in all languages is a standard way of handling data types and comparing them. I constantly struggle with = and == when I move away from SQL. The comparisons in PoSh make no sense, and I regularly get errors from > until I change to -gt. It seems plenty of language designers want to make their own creation and cause divergence for no good reason. I do wish SQL had implemented == for comparisons, or that they would in the future.

As I look back, some things in computer science haven’t really changed at all. Speed and scales have grown, but many concepts remain the same. However, in other ways, I think we’ve come so far, building amazing systems that are interconnected in ways that we might never have imagined a decade ago, much less 30 years ago.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.4MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | 2 Comments

Lots of Resources

I’ve got a small Azure subscription, but there are a lot of resources in my mini lab. Forty-seven, to be exact, right now.

(screenshot: the All resources view in the Azure portal)

When I look at my bill, it’s got lots of detail, but it’s not always easy to understand. I get a breakdown and a burn rate.

(screenshot: subscription billing summary in the Azure portal)

I can also see details, but tracking these down to ensure I’m making good use of my credits and other charges is hard.

(screenshot: costs by resource in the Azure portal)

I can only imagine what this looks like for a company like Redgate, and the struggles between developers, admins, and finance people trying to determine what we have, use, need, and can let go.

Posted in Blog | 2 Comments

SSMS is Free

Really, SQL Server Management Studio (SSMS) is free as in beer. Go download it today.

I got a note from a reader recently complaining about SSMS and the lack of MDI support. This individual mentioned that they would undock windows (something I never do) and if they minimized the parent, the child windows all disappeared. I wondered if there were still issues with SSMS, so I fired up my version, undocked some windows, and played around. Things worked as I expected, and every window was independent of the others. I had no issues working, though I did find I’d forget on which monitor a particular child window would appear.

Last year (2016) we saw SSMS get released as a separate download for SQL Server. The tool has its own release cycle, and we saw new updates every other month. As this development team at Microsoft got up to speed with the process and began improving and changing the product, we saw some rough release cycles, but things stabilized a bit late last year. The move to the Visual Studio 2017 shell with v17.x was nice, and I’ve found the latest version to be very stable and easy to use.

After working with Enterprise Manager in my career, then moving to Management Studio and seeing the product languish over the years as SQL Server grew, I am pleased with the direction of SSMS. The changelog is quite impressive, and I expect more things to be fixed and improved in the future. The team is more responsive, and while they won’t fix everything I (or you) want, they are making progress.

The new SSMS is not tied to any version of SQL Server. You can use it with SQL Server 2008 and later (though there are OS requirements). It will work with SQL 2000+, though there may be some issues. If you can, I’d say abandon whatever SSMS version you’re using and get the latest 17.1 release. It works well, is stable, and has lots of fixes for previous issues. Plus, you won’t need to apply those old 2008/2012/2014 patches to your workstation. Just update SSMS on your schedule.

SSMS is free, and while some of us have known this for a while, I regularly meet people still using the version that came with 2008, R2, 2012, etc. Go download the latest bits today. This is the easiest SQL Server upgrade to justify.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.1MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Leave a comment

Back to the Big Easy and SQL Saturday #628 in Baton Rouge

I’ve been fortunate enough to get to the Baton Rouge SQL Saturday a number of times. I think I’ve been 3 times and am heading back in a week for my fourth.

This is an amazing event, one of the largest in the country. I’ve met people from Texas to Florida to Tennessee, all of whom drive to join the crowd of 500+ developers and database professionals. Held at LSU, it’s also a beautiful location.

If you’re anywhere close, come down for a great, free day of training and inspiration. I’ll be doing two sessions:

Hopefully I’ll see a few of you there.

Posted in Blog | 2 Comments

DevOps Basics–Git log

Another post for me that is simple and hopefully serves as an example for people trying to start blogging as #SQLNewBloggers. This is also part of a basic series on git and how to use it.

In a few previous posts I’ve looked at getting going with git, and in this post we continue by looking at how we can get some information about the actions we’ve taken.

If we want to see what has happened in our repo, lots of clients will show a list of changes, but from the command line we use a simple “git log”. When I do this, I see the reverse chronological view of commits.

(screenshot: git log output)

There are a lot of options for the log command, but there are a few I use often.

Limit Entries

I often use -n, where n is a number, to limit what’s returned. For example, I’ll use -3 to show the last 3 commits.

(screenshot: git log -3 output)

I also like the -p option, which will show differences. As you can see here, I added the UserRoles.SQL file, putting in new lines.

(screenshot: git log -3 -p output)

At times, I like the --decorate option, which lets me know which branch was affected. This is helpful if I’m moving around on branches and I get confused. That does happen.

(screenshot: git log -4 --decorate output)

There are lots of search options, and I use them at times, but rarely, so I’m usually searching the documentation for the date or pattern syntax. I do use the --committer= option with my name. That lets me find my changes among others.

I also like to keep things small, so using the --pretty=oneline option is handy.

(screenshot: git log --pretty=oneline output)

Now I can easily see what I’ve done lately.
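Putting the options above together, here’s a quick sketch you can run anywhere git is installed. The throwaway repo and the demo identity are just scaffolding so the commands have some history to display:

```shell
#!/bin/sh
set -e
# Build a scratch repo with two empty commits so git log has history to show
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first commit"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second commit"

git log -1                  # only the most recent commit
git log -p                  # include the diff for each commit
git log --decorate          # show branch and tag names next to commits
git log --committer=demo    # only commits by a matching committer
git log --pretty=oneline    # one line per commit: hash and subject
```

Run in a scratch directory; nothing here touches your real repositories.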

There are lots of ways to look at history, and certainly a client makes things easier, but I’d say that you should learn the command line, just in case there’s some issue and your client doesn’t display it properly.

Last thing: when you run git log and end up with a colon prompt, you’re in the less pager (git pipes long output through it by default). To get out just type:

q
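If you’d rather not land in the pager at all, git can be told to skip it. A small sketch (again using a throwaway repo so the command has something to print):

```shell
#!/bin/sh
set -e
# Scratch repo with one commit, just so git log has output
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "only commit"

git --no-pager log           # one-off: print straight to the terminal, no pager
git config core.pager cat    # per-repo: never page git output in this repo
git log                      # now prints directly, no colon prompt
```

Use --global on the config command if you want that behavior everywhere, not just in one repo.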

Posted in Blog | 2 Comments

The Digital Twin

The world is changing quickly, and it’s becoming incredibly personalized thanks to digital technologies. I saw a fascinating video from GE on digital twins. These are digital representations of their products, using specific data (sensor, visual, weather, settings, etc.) to build a model of a specific piece of equipment. This model runs on a platform and constantly analyzes new data to evaluate and predict the performance of the system. With equipment like jet engines, power plants, and more, a tiny increase in efficiency can translate into incredible cost savings or revenue increases.

The idea of using digital twins is taking hold in other fields, and I expect that we’ll continue to see all sorts of digital representations and models of real world items. We are even starting to see this in medicine, with personalized treatments for various diseases, including cancer. By using more data, and powerful computing capabilities, we can tailor our treatment to the individual and their particular ailment.

This personalization has an annoying side as well. After all, many of us have experienced targeted advertisements and annoying uses of our personal information, though much of that is crude and lacking in sophistication. Perhaps the models using data about me would be less annoying if companies didn’t try to sell me a laptop a week after I’ve purchased one, or show me sales on products when I’m looking at SQL Server articles.

I’m amazed and hopeful that our computing systems will evolve, with more talented data scientists blending their knowledge in some problem domain with powerful computing capabilities and lots of data. I expect that some of the challenges we face with our aging infrastructures and physical systems will be helped by extensive data and powerful machine learning models specific to an instance.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Leave a comment

Getting Close to the 2017 RTM

I still can’t believe that we’ll have a new version of SQL Server this year. After speaking at so many events last year, talking about the new features of SQL Server 2016, it seems crazy that there’s a new version coming out a year later. Welcome to the new world of DevOps, fast engineering processes, and the increasing pace of software releases from vendors. We can debate the wisdom or value of this, and you might not like it, but it’s certainly the reality of today.

The first Release Candidate (RC1) for SQL Server 2017 is available this week. The big changes for RC1 are that we can now use Active Directory authentication on Linux, and we get SSIS on Linux. There are a few other items, but these are the big ones. I guess SSIS scale out is a big deal for some people as their data load times increase, but I’d think that’s a relatively small number of people. I’d also be wary of adding clustering support for my ETL workloads, none of which have been designed for that environment. I see this feature becoming more important and valuable over time. For now, let the SSIS gurus develop some patterns and practices that make sense for the rest of us to follow.

There are other new features in SQL Server 2017, and I’d urge you to play with them a bit. Upgrades are always tricky to justify for me, as I’m sure they are for you. If you don’t know how the new features work, or how they might apply to your systems, how can you decide what to do?

I tend to favor sticking with what works for older systems and moving to new versions for newer systems. Your view might vary, and certainly unless you want to set up SQL Server instances on Linux, I’m not sure 2017 offers a lot over 2016. In fact, I’d accelerate any SQL 2016 instance installations I could to avoid being trapped with SQL 2017 licenses and the chance that price or licensing terms will change. If there were features that made significant advances for my current system, I’d certainly look at SQL Server 2017.

Since I tend to only move to newer versions when there is a good reason or a new install is being performed, I like the rapid release cadence. With the deployment and testing in Azure, it seems to me the quality of SQL Server keeps increasing, and the rapid releases allow new changes to come out sooner rather than later. With a version every 18 months (my guess at the new pace), I can adopt new features relatively quickly if I think they are beneficial.

That being said, SQLServerCentral still runs on SQL Server 2008. It works, and we really just need the core database engine. I do find it strange to work with the 2008 version of T-SQL, as some of the data analysis I try to do is harder to write. I’d really like to upgrade, and hopefully we’ll make a good enough case to move to SQL 2017 late this year or next. Maybe then we can get the chance to play with some graph capabilities and add them to SQLServerCentral, comparing them to good old relational queries in real time.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.8MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Leave a comment

SQL Server 2017 RC1 is Here

And I’ve got it.

(screenshot: SSMS query window connected to a SQL Server 2017 RC1 instance)

Glenn Berry posted a link to the MSDN blog, and almost immediately there was a note that the link still pointed to the CTP. I decided to fire up my Ubuntu install and check. No updates there, though a few people reported Docker had the new bits.

A short while later, the Windows link was correct, and my Linux update worked.

I did re-register the repository:

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

And then the rest was the same as previous updates. Quick and easy: a few command-line steps and I was updated.

Now I need to get Docker working, since I’m betting we’ll see updates and changes there faster than anywhere else.

Posted in Blog | 2 Comments

PowerShell $env Variables

I was playing with containers the other day, reading a Simple Talk article on the topic, and noticed the code in PowerShell (PoSh) used the $env:xx syntax for variables. I know I’ve seen this before, but for some reason this struck me as something I knew little about.

I decided to search a bit. One of the first links was from WindowsITPro, and they had some great code: Get-ChildItem ENV:. When I run that, I get lots of information:

(screenshot: PowerShell listing of environment variables)

There are more items, but there are some interesting ones that I think could be useful for me, especially with automation. I have a OneDrive path, my home path, the Username, and more.

I know I’ll never remember most of these, but really I just need to remember one: Get-ChildItem ENV:

Posted in Blog | Leave a comment

Short Names

I started using computers a long time ago, and in the PC world, we were often limited to an 8-character name and a 3-character extension. Unix and MacOS allowed longer names, and many of us in DOS and Windows were jealous. Eventually Windows evolved, allowing long names, spaces, and really quite a bit of latitude in what you want to name a file. For example, try this and see what happens on your SQL Server:

BACKUP DATABASE Sandbox TO DISK = 'Sandbox.thisisafullbackupthatIusetostartarestore'

However, the three-character extension still dominates, and many applications still use it. Microsoft has started to get away from this, as we have .docx, .xlsx, etc. Other vendors and software systems have started to expand names slightly. I was reading about the SQL Server Diagnostics Preview and noticed that the engineers will take a dump file (.dmp) as well as a mini dump (.mdmp) and a filtered dump (.hdmp).

Now I know developers are lazy, and they don’t like to type, but in these days of auto-completion and other tools, why are we limiting ourselves? Why wouldn’t we use .minidump or .filtereddump as a more descriptive way of identifying the file? If we are no longer bound, why not include a better extension? I can’t imagine that the filesystem for many tools would be stressed by longer names.

I’m assuming that people still feel bound to using the shortest set of letters that they think are unique, but with the growth of software applications from many, many sources, why not just be more descriptive? Would you want to see better filenames? I know I would.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.7MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Leave a comment