Volleyball season is approaching. Practice started last week for the team I’m coaching this year, and I’m excited. I look forward to teaching and competing with a new group of athletes each year, and for a few months I’m also looking forward to a more regular schedule and a bit less traveling.

In preparation for this season, I’ve been doing some learning, reading, and watching to improve my abilities, something I’ve done for a few years now. One of the books I finished recently was Wooden: A Coach’s Life, a biography of John Wooden. It looks at his life as a player and a coach, along with some of the principles that embodied his work as a college basketball coach.

There were interesting stories and topics in the book, but one of its core themes was Coach Wooden’s emphasis on the fundamentals of the game. He stressed this with his players, asking them to work on the basics and perfect them rather than focusing on complex plays or situations. I tend to focus on the basics when I coach as well, hoping to train players to be good at their jobs and trusting them to react to new situations.

This feels like advice that applies to data professionals as well, especially in an era when new features and functions continually expand the capabilities of the Microsoft data platform. While graph structures, containers, Azure Data Factory, and Big Data Clusters are amazing new technologies, there is still a need for good, solid fundamental skills on a SQL Server system. We still expect anyone working in those areas to know how to back up a database, how to write good T-SQL, how to set security for objects, and more.
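To make that concrete, here’s the sort of first-year task I mean. This is just a sketch; the database, path, table, and role names are placeholders, not from any real system:

```sql
-- Take a full backup and verify it with a checksum (names and path are placeholders)
BACKUP DATABASE Sales
    TO DISK = N'D:\Backups\Sales.bak'
    WITH CHECKSUM, COMPRESSION;

-- Set object-level security: grant read access to a reporting role
GRANT SELECT ON OBJECT::dbo.Orders TO ReportingRole;
```

If either of those statements gives you pause, that’s a sign of where to spend some learning time.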

If you want to specialize, that’s great. Perhaps you love BI or HA or some other aspect of the SQL Server data platform. Just keep in mind that the fundamentals matter, no matter what your job is. You ought to be very competent at any of the tasks we would teach a junior DBA in their first year on the job. Once you know those, you can move on to more specific items. If you don’t, be sure to include the fundamentals in your learning alongside the more niche topics.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.4 MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged | Leave a comment

T-SQL Tuesday #108

It’s that time of month, and this is a good topic as it relates to career learning. I’m a big fan of improving your career, so I like it. The invitation is from Mala, one of the people I look forward to seeing each year at various events.

Non-SQL Server Tech

At heart, I’m something of a data person, though I dabble in other technologies at times. This year, I made it a point to work on learning two new technologies, one of which was outside of SQL Server. I chose Python and spent about five months on various Python courses. Then life and work got in the way.

I still want to spend a bit more time on Python, but I also recognize that I need a new challenge, so I’m going to pick something else for 2019. For me, this will be CosmosDB.

I think CosmosDB is a neat technology with some really good things inside it, but I really don’t know enough about it. I’ve had minor exposure to NoSQL structures, but not enough to know how well I’d use them on a project.

The Plan

For 2019, or at least the first quarter(ish), I want to port a database from SQL Server to CosmosDB and explore the differences. I have a few sample databases, but I’ve also been compiling a database of SQL Saturday data and want to use that as a test. I’ll work on moving the data into different CosmosDB structures, likely a document structure and a graph structure, and gain some experience with how these work.
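To get a feel for the shape of that port, here’s a minimal Python sketch (Python being the other technology I spent time on this year) of the core reshaping step: flattening joined relational rows into the nested documents a document store expects. The table layout and column names here are hypothetical, not the actual SQL Saturday schema.

```python
# Hypothetical flat result set, as if joined from Events and Sessions tables
rows = [
    {"event_id": 700, "event_name": "SQLSaturday Denver",
     "session_title": "Intro to DevOps", "speaker": "Steve"},
    {"event_id": 700, "event_name": "SQLSaturday Denver",
     "session_title": "T-SQL Basics", "speaker": "Grant"},
]

def rows_to_documents(rows):
    """Group flat relational rows by event into one nested document per event."""
    docs = {}
    for r in rows:
        doc = docs.setdefault(r["event_id"], {
            "id": str(r["event_id"]),   # document stores typically want a string id
            "name": r["event_name"],
            "sessions": [],
        })
        doc["sessions"].append({"title": r["session_title"],
                                "speaker": r["speaker"]})
    return list(docs.values())

docs = rows_to_documents(rows)
print(docs[0]["name"], len(docs[0]["sessions"]))
```

The interesting part of the exercise will be deciding where to draw the document boundaries, which this sketch glosses over.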

I hope to build a simple REST website that accesses these databases, which should also let me compare the differences for data access and note where one structure might work better than the other.

I’ll set a reminder for the end of each month in 2019 (Jan-Apr) to evaluate where I am.


Random Pix from the 2018 PASS Summit

A few memories from me. First, a beautiful late arrival view.


My first session feels a little lonely


It started to fill a bit later


A few selfies with Angela

and my view during a break after Thursday’s sessions.


Where’s my room?


It’s a wrap


After Friday, a nice walk out of the convention center and outside



Until next year.


Internal Controls

I was browsing the Internet and stumbled on a small part of a larger story that struck me. Many of you may have heard the story of Jamal Khashoggi, the journalist for the Washington Post who was killed. I hadn’t spent much time reading about it, and I don’t really want to discuss that topic here. The politics of the situation are not relevant here.

There’s a part of the NY Times background story that caught my eye when a quote was posted on Twitter. This is part of that quote: “The intelligence officials told the Twitter executives that Mr. Alzabarah had grown closer to Saudi intelligence operatives, who eventually persuaded him to peer into several user accounts.” Essentially, an employee at Twitter was accused of accessing, and potentially disclosing, sensitive data about customers. This is what I want to discuss.

In my career, there have been quite a few times when I’ve had to access data to solve a problem, debug an application, or produce a report. In many cases, I’ve had to maintain the confidentiality of that data, not even discussing specifics with other employees who weren’t supposed to view the information. To me, that’s just part of being a professional. We handle all sorts of data, some of which we should never use outside of solving an issue or producing a report.

As I thought about what was alleged here, I wondered how many social media companies have controls or auditing in place to determine who has accessed information. Would they be able to produce a report that validates an assertion that data was, or was not, accessed? I doubt many companies have these kinds of controls. Unless some Excel file or other export was left on a file share, would there be any evidence?

Then I thought: does anyone really do a good job of producing audit records for information access? I know some government and law enforcement systems do this (and some legal software), actually tying queries and results to an individual and even a specific piece of work. That’s not the nature of information systems for most of us, though perhaps it ought to be.

Auditing data, especially for information access, could produce a huge amount of data. Even keeping a record of all user access for a week in most SQL Server databases might generate more data than many of us have in our databases. I do think we ought to have the option, and I hope we get more detailed, more capable, and more configurable methods of auditing SQL Server activity in the future (hint: give us SQL Audit data in a csv).
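For SQL Server today, the closest built-in option is SQL Server Audit. A rough sketch of recording reads against a sensitive table might look like this; the audit name, file path, database, and table are all placeholders:

```sql
-- Server-level audit target (file path is a placeholder)
CREATE SERVER AUDIT DataAccessAudit
    TO FILE (FILEPATH = N'D:\Audits\');
ALTER SERVER AUDIT DataAccessAudit WITH (STATE = ON);
GO
-- Database-level specification: record every SELECT on a sensitive table
USE SensitiveDB;
GO
CREATE DATABASE AUDIT SPECIFICATION UserDataReads
    FOR SERVER AUDIT DataAccessAudit
    ADD (SELECT ON dbo.UserAccounts BY public)
    WITH (STATE = ON);
GO
-- Later, read the trail to answer "who looked at this data, and when?"
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file(N'D:\Audits\*.sqlaudit', DEFAULT, DEFAULT);
```

Even this simple setup could answer the basic question in the Twitter case: did this account ever query that user’s data?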

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2 MB) podcast or subscribe to the feed at iTunes and Libsyn.


November Data Platform Releases

This past week was the PASS Summit in Seattle, an annual conference that attracts thousands of SQL Server and data platform professionals. I’m lucky in that I usually get to attend, and this year was no different, although it was a short Summit for me. There are live blogs of the keynotes from Kendra for Days 1, 2, and 3 if you want to catch up.

Microsoft was there with a large presence, and as always, they delivered a keynote with plenty of demos showcasing new changes and enhancements for the data platform. In this case, center stage really belonged to SQL Server 2019, though plenty of other items were shown as well. I was surprised to see the Azure Managed Instance get so many mentions. I suspect this is an easy way for many companies to transition away from an expensive local data center, or to use fewer staff while continuing to run SQL Server outside their existing infrastructure. I don’t know if this is a good fit for most customers, but Microsoft certainly wants you to try it. The Business Critical edition (with business critical pricing) becomes generally available on December 1.

There were a few releases as well, which you might want to play with in your lab. First, Azure Data Studio (ADS) got its November release. I’m still not certain I love the tool, but Microsoft is working hard to improve it and add features. There aren’t a lot of changes this month, but there are a few more extensions and a number of bug fixes. The paradigm for ADS is just a little off for me, and I’m not quite sure why. I find VS Code works well for C#, Python, and PowerShell, so why does ADS feel off? I’m not sure, but let me know in the discussion if you like the tool.

We also have our second release of SQL Server 2019, with CTP 2.1 being announced. It’s supposed to be available Friday for download, at least as a container, but we’ll see. There aren’t a lot of changes, but there are some. What’s more impressive is Microsoft releasing a second version a month after the first one. They hope to get to a monthly cadence, which I think is amazing for a product as large as SQL Server, especially as Windows struggles with its cadence.

There is one amazing new feature that I think will really improve SQL Server performance for many systems: Scalar UDF Inlining. It’s not a panacea, but it should dramatically improve the use of functions in many workloads. There are restrictions, and it’s SQL Server 2019 only, but I look forward to testing a few demos to see how well things perform with this enhancement to the query processor. You should give it a try as well, testing workloads before and after enabling compatibility level 150. If you see improvement, maybe there’s a good case to upgrade instances that use lots of functions.
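As a rough sketch of how you might test this, take a scalar UDF like the one below (the function and table names are hypothetical), run the same workload under each compatibility level, and compare timings and plans:

```sql
-- A hypothetical scalar UDF of the kind inlining targets
CREATE OR ALTER FUNCTION dbo.DiscountedPrice (@price money, @pct decimal(5,2))
RETURNS money
AS
BEGIN
    RETURN @price * (1 - @pct / 100.0);
END;
GO
-- Before: UDF invoked once per row
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
SELECT SUM(dbo.DiscountedPrice(UnitPrice, 10)) FROM dbo.OrderLines;

-- After: eligible UDFs are inlined into the calling query
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
SELECT SUM(dbo.DiscountedPrice(UnitPrice, 10)) FROM dbo.OrderLines;
```

Not every function qualifies for inlining, so check the plan to confirm the UDF actually disappeared from it.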

There are more announcements, especially in the BI area. We get some cool SSRS enhancements, and you might want to watch my friend Patrick LeBlanc demo the changes in Power BI. I love Power BI, and I think it’s going to be the de facto reporting tool for most organizations moving forward. Maybe it will even displace Excel for visuals.

There are a lot of moving parts in the Microsoft Data Platform right now, which may feel overwhelming to many of us. That’s fine. We don’t have to learn everything, but we can pick something that looks interesting and spend a few hours playing. You never know what you might get inspired to learn more about.

Steve Jones


Vote for the PASS Board of Directors

Voting is open for the PASS Board of Directors. It’s a non-event this year, with three open positions and three candidates. That’s disappointing, as I’d hoped to see new candidates, new blood, and some change in the organization. I’m not complaining, since I didn’t run, but I hope more people will run in the future.

You might think there’s no reason to vote, but one of the people voting will win a free registration to the 2019 Summit. That alone is worth a few clicks and a moment of your time.

Log into your MyPASS account and you can vote. Good luck in the contest.


Republish: Data Breach Danger

I’m in Seattle today, delivering two talks, so you get a republish of Data Breach Danger.


Republish: The Cost of Switching

Still on vacation. Enjoy The Cost of Switching.


Creating a Quick Dashboard Widget

I read Carlos Robles’ blog on creating an Azure Data Studio (ADS) insight widget and decided to try it for myself. I decided to get a list of object types and a count of each. Following his instructions, here’s what I did.

First, I wrote a query to gather the types of objects and their counts from sys.objects.
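A query along these lines does the job (this is a close approximation, not necessarily character-for-character what I ran):

```sql
-- Count user objects by type, excluding system-shipped objects
SELECT o.type_desc, COUNT(*) AS ObjectCount
FROM sys.objects AS o
WHERE o.is_ms_shipped = 0
GROUP BY o.type_desc
ORDER BY ObjectCount DESC;
```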


I saved this as a file in my Documents folder. Once I had it, I ran the query, and in the results pane, I clicked the Chart item to the right of the results.


This displays a bar chart, which is useful at times, but that’s not what I want.


Instead, I’ll expand the Chart Type drop-down, which gives me a list of chart types to choose from. For this insight, I’ll use the table type.


Once this is done, I’ll see the results I want. There is a “Create Insight” button above the chart; I’ll click it to get the JSON code that creates the insight.


A new editor tab opens with the JSON code in it. It’s just one long line of code, which isn’t easy to read or work with.


CTRL+Shift+P opens the command palette. Type “format” and you’ll see the Format Document command. Once you format the code, it will be easier to read.


The next step is to open the user settings. This is also in the command palette.


In my ADS, I get the settings list. I can search for “dashboard” and find the setting I want.


If I put the cursor over the setting, I’ll get an edit item. Picking this gives me one of two options: if I’ve never customized anything, I get “copy to user settings”; if I have, I get “Replace in settings”.


Once I’ve done this, I want to modify some code. I don’t love the search box, so I’ll delete that JSON widget. Then I paste in my widget code. Be sure you put it in the right place, with a comma between the widgets’ braces. I changed the name from “my-widget” to “User Object Count”.
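For reference, the widget entry in settings ends up with roughly this shape. The exact keys can vary between ADS releases, and the path here is a placeholder for wherever you saved the query file:

```json
"dashboard.database.widgets": [
    {
        "name": "User Object Count",
        "gridItemConfig": { "sizex": 2, "sizey": 1 },
        "widget": {
            "insights-widget": {
                "type": { "table": {} },
                "queryFile": "C:\\Users\\Steve\\Documents\\UserObjectCount.sql"
            }
        }
    }
]
```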


Once I do this, I save and close the file. Then I double-click one of my server connections and see my widget on the dashboard.


It’s pretty easy to get a query to appear on your dashboard as a chart if you want some insights. Any query could be used, with the results returned when you open the server connection. You can also leave the dashboard open and refresh it when convenient.


Wasted Work

I’ve been re-reading some of the manufacturing and DevOps books I’ve collected over the years. In these texts, a number of concepts stand out, but the one that resonates with me is waste. There’s a good article on LinkedIn about waste that’s worth reading as well. While it’s easy to start spinning in circles and thinking everything around you is waste, step back and keep some perspective: some waste is inevitable because the world is messy. Don’t get too caught up in it, but try to reduce waste where you can.

One area that I think contains a lot of waste in software development is the effort spent building features that won’t be used. Like many of you, I’ve been asked for no shortage of changes to software over the years. Often those changes require both application and database changes, usually to support a new way of conducting business.

The interesting thing to me is that I’ll often work my way through a queue and get changes completed, a percentage of which rarely, or even never, get used. What seemed like a good idea when it was requested, specified, and scheduled may not be a good idea weeks or months later.

In other cases, the latest request is labeled as important and needing to be completed ASAP. This might displace older work, perhaps even pushing it to the lowest priority level, where it may never get completed. That’s fine; if the work isn’t important enough for someone to keep pushing for it, perhaps it isn’t needed.

To me, this is one of the advantages of working in a DevOps-style flow, with small changes being developed and released. If clients start to use a feature and need additional development, we can continue to enhance it. If they don’t use it, which we learn from either future requests or instrumentation (the latter is preferred), then we can put our effort into the areas that are more important to our organization.

With less waste.

We don’t work large projects to completion, wasting effort on the parts that will never be used. Instead, we complete small pieces and keep shifting our focus to the areas that matter most to clients.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7 MB) podcast or subscribe to the feed at iTunes and Libsyn.
