Watch the State of SQL Server Monitoring tomorrow

Tomorrow, Redgate is running a webinar on the State of SQL Server Monitoring with Grant, Chris Yates, Annette Allen, and Tony Maddonna. You can register now and attend on June 25 at 4pm BST / 11am EDT / 8am PDT.

What has Redgate’s latest survey revealed about the state of SQL Server monitoring in 2019? Each year we contract for a survey on monitoring and produce a report (similar to and in addition to the State of Database DevOps). This is the only industry report to benchmark monitoring in the SQL Server community, now in its second year.

Join our panel of SQL Server experts as they discuss the key findings of our industry-leading report, which includes insights such as:

  • Getting to grips with growing estates
  • Issues caused by migrations
  • The tooling being used
  • Staying on top of multiple database systems
  • The next big challenge facing the industry

The report has data from over 800 participants across a range of sectors and from all around the globe. The key findings include: estates are continuing to grow, migration is predicted to be the biggest challenge facing database professionals this year, adoption of cloud technologies is increasing, and Redgate’s SQL Monitor is the most popular third-party tool (even when we exclude our own customer base).

The report will be available after June 25th. Look for a link, or attend the webinar and we’ll send you one.

Register today.


Microsoft, Think DevOps First

This post from Melissa Coates is a good example of not thinking through your product architecture early on. The short version of the post is that Power BI Desktop is a better place, perhaps really the only place, to author your Power BI reports. Melissa covers a few potential issues with the Power BI online editor, but the big item is that you cannot easily prevent conflicts and track versions there.

Power BI started as the online service, with the Power BI Desktop tool seeming like a bit of an afterthought, at least from a marketing perspective, since all of the early demos and media from Microsoft were about the online report service. Since then, Desktop has evolved to become the primary way we will build reports in SQL Server moving forward, which I think is a good move.

While I can understand the developers at Microsoft not really thinking that the service would run on-premises, and maybe not considering the need to provide some text format for the reports, I can’t understand why they didn’t learn from the Integration Services team and realize that binary versions of programmable items don’t make sense. We need a format that can be easily versioned and, maybe more importantly, stored in a VCS and diff’d in a way humans can understand.

When building a format for storing code, please consider the need to work in a team and version the changes made. This means any format should be neither a) binary nor b) difficult to decode. Separate visual elements from logical elements and ensure a text version of this can be examined by developers. You can use XML, JSON, YAML, or any other text-based format, but choose something that makes sense. Even if you add your own extension, ensure that standard tools that work with code can still use it.
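To make the idea concrete, here’s a purely hypothetical sketch, in Python, of a report definition that keeps the logical pieces separate from the layout and serializes everything as readable JSON. None of these names come from Power BI or any real format; they’re just illustration.

```python
import json

# Purely hypothetical illustration: a report definition that keeps the
# logical parts (datasets, measures) separate from the visual layout,
# all in plain text so a VCS can diff it line by line.
report = {
    "logic": {
        "datasets": [
            {"name": "Sales",
             "query": "SELECT Region, SUM(Amount) AS Total FROM dbo.Sales GROUP BY Region"}
        ],
        "measures": [
            {"name": "TotalSales", "expression": "SUM(Sales[Amount])"}
        ],
    },
    "layout": {
        "pages": [
            {"name": "Overview",
             "visuals": [{"type": "barChart", "dataset": "Sales", "x": "Region", "y": "Total"}]}
        ]
    },
}

# Write formatted JSON so a diff is readable to humans.
with open("SalesReport.report.json", "w") as f:
    json.dump(report, f, indent=2)
```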

I do know the PBIX format is a ZIP file, but zip files don’t easily integrate with a VCS. We could use hooks to extract/rebuild our files on commit/checkout, but that’s cumbersome and silly. I’d rather the PBIX were a folder with the files inside. Users, including my Mom, can zip a folder and email it if needed. To me, that would have been a better structure from the beginning.
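For what it’s worth, here’s a rough sketch of what that extraction step might look like in Python. The file and folder names are just examples, and some of what comes out of the archive may still be binary, so this only gets you part of the way there.

```python
import zipfile
from pathlib import Path

# Unpack a .pbix (which is just a ZIP archive) into a sibling folder so
# the text-based parts can at least be inspected, diffed, and committed.
# File and folder names here are only examples.
pbix = Path("SalesReport.pbix")
target = pbix.with_suffix("")  # e.g. ./SalesReport/

with zipfile.ZipFile(pbix) as archive:
    archive.extractall(target)

print(f"Extracted {pbix.name} to {target}/ (some contents may still be binary).")
```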

Microsoft is supposed to be a company providing platforms that we build upon and use in our work. The decisions for the Power BI service seem to be poorly thought through with that in mind. I’d urge them to create a baseline set of rules for future products that consider DevOps, teams, and the need to track code.

Steve Jones


SQLServerCentral–Fixing the Time Zone

One of the bugs (features?) of the bbPress forum software is that all posts were shown in a single time zone, which for us defaulted to UTC. Not a bad choice, but it made for funny displays, depending on where you were. Posting from the UK during Daylight Saving Time showed you as having posted an hour ago. Posting from the US showed future times.

A mess, for sure.

It’s fixed, or at least it’s mostly fixed if you’re a traveler. This fix rolled out while I was in Australia, and as a result, when I logged in my time zone got picked up as Sydney. That made sense, since that’s roughly where I was (Brisbane or Melbourne), but it made for some funny posts once I was back in Denver. I replied to a post yesterday, and the times shown for the original post and my reply were both the next morning.

There’s an easy fix for now. You can access your profile from the upper right menu. Once it comes up, click “Edit Profile” and then find the time zone setting. If it’s wrong, fix it and your posts will show at the right time.
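For anyone curious what’s going on under the covers, the general pattern is to store post times in UTC and convert them to each user’s profile time zone only for display. Here’s a rough Python sketch of that idea; the time zone names and the timestamp are just examples.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Store the post time once, in UTC.
posted_utc = datetime(2019, 6, 24, 15, 30, tzinfo=timezone.utc)

# Convert only for display, using each viewer's profile time zone.
for tz_name in ("Europe/London", "America/Denver", "Australia/Sydney"):
    local = posted_utc.astimezone(ZoneInfo(tz_name))
    print(f"{tz_name}: {local:%Y-%m-%d %H:%M %Z}")
```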

It’s something I wish we’d thought of early on, but in testing we were usually looking for posts in the right order, and I never considered whether the actual time shown was the time zone I was working in. A mistake on my part, and my apologies to those in the community who were distracted, upset, or confused by this.


Practice Those Scripts

I’ve written lots of scripts that were deployed to production. I’ve often had another set of eyes look them over, and still, we made mistakes. In fact, a recent Salesforce outage was blamed on a poorly written database script that gave users more rights than they should have gotten. The script itself didn’t cause an outage, but since customers might have been able to see and change data belonging to other customers, Salesforce took its own service down to prevent anyone from doing so.

I’m a big fan of DevOps, and certainly of including the database in a DevOps process to build a better software development flow. Part of that is ensuring that you can deploy by practicing the act multiple times. In a database world, this means we run a script not just on a development server, but on a QA server, on a staging server, and on any other environment we can find to practice and test the deployment. At that point, we should be confident the script will execute on the production system without issues.
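As a rough illustration, assuming you deploy with sqlcmd and have a handful of pre-production servers, the same script can be pushed through each environment in turn and stopped at the first failure. The server, database, and script names below are just examples.

```python
import subprocess

# Run the same deployment script against each pre-production environment
# before it ever touches production. Server names and the script path
# are only examples; -b makes sqlcmd return an error code on failure.
environments = ["dev-sql01", "qa-sql01", "staging-sql01"]
script = "deploy_permissions_change.sql"

for server in environments:
    print(f"Deploying {script} to {server}...")
    subprocess.run(
        ["sqlcmd", "-S", server, "-d", "SalesDB", "-i", script, "-b"],
        check=True,  # raise if the deployment fails, so we never skip ahead
    )

print("Script succeeded in every pre-production environment.")
```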

Good in theory, but sometimes you can’t easily test scripts in intermediate environments. I think security changes are a place where it can be hard to actually test things, especially if specific accounts are referenced that might not need, or have, access in that environment. Certainly some data changes can easily be tested in intermediate environments, especially when they refer to configuration differences, like email or messaging systems.

In this case, I suspect the “access changes” were data changes that updated values in certain tables in the Salesforce application. If so, why wasn’t this tested? A restore of production to a staging environment would have allowed developers to test their script. It’s not multiple executions on intermediate servers, but it’s better than nothing.

I’m sure many of you have had the need to execute scripts to change data, alter permissions, or do something else in production. Could the same thing that happened to Salesforce happen to you? What precautions do you take, or what would you recommend, to prevent this type of issue? Let us know today.

Steve Jones

Listen to the podcast at Libsyn.
