Speaking at VS Live–Austin

I’ll be speaking at Visual Studio Live – Austin in May 2017. I was fortunate to be selected to speak again, after my first two VS Lives last year in DC and Orlando. I’m excited to go back, as the conference is a lot of fun, and Austin is a great town.

I have two sessions, Continuous Integration for Databases and A Tour of SQL Server 2016 Security Features. I’ve given similar sessions in the past, but I’m revamping the talks for this year, updating them and adding some new demos.

When I went to the conference(s) last year, I really enjoyed my time there. I had the chance to see some great SQL and Visual Studio sessions. With the upcoming release of VS 2017, and SQL Server v.Next, this is a good time to think about a spring conference.

Plus, the margaritas and Tex-Mex in Austin are fantastic.

If you're looking for a learning opportunity this May, I hope to see you there. Register before Mar 17 and save some money as well.

Posted in Blog | Tagged , | Leave a comment

Human and Machine Learning

I was reading about Microsoft Cognitive Services and their wider preview release to more developers. These are a few of the many machine learning services that anyone can use to build more intelligence into their applications. The full Cognitive Services suite includes some interesting sets of APIs that let us build new features, and even new capabilities that we might never have considered without the power of machine learning to change the behavior of systems over time.

With machine learning, we have more and more ways to analyze data, allowing our systems to actually react to the data and become better at their particular tasks. I think there are certainly some dangers in how these systems might be hacked, but for many uses, I think the ways these platforms can work with speech, images, and more might really alter how we decide to build new applications in the future. The goal, at least according to Microsoft, is to have the AI services help humans accomplish tasks, not replace them. That's something I hope actually comes true.

I do think that we don’t quite know how these machine learning systems will grow and interact with people over time. They work by analyzing data, and altering behavior based on data, which means that we need to better understand the implications and effects of various algorithms and systems. I hope that more companies and developers spend time experimenting and working with various APIs and services to learn more about them. I’d like to see more projects and proof-of-concept systems.

Some of us will just build things that aren’t useful, or that don’t even work well. That’s OK. We need to experiment, and understand these are experiments. These aren’t guaranteed ways of producing information from data. As long as we get that, or at least understand some of the work we may do for clients will need to be thrown away, that’s fine. I know there’s pressure in many companies to be efficient and just work on things that help the organization move forward. I’m sure some companies will make bad decisions, or even abuse their AI systems. I’m also hopeful that more will realize that some experimentation is necessary if you want to find new ways to create richer, more reactive and customized systems based on the massive amounts of data we collect every day.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.0 MB) podcast or subscribe to the feed at iTunes and Libsyn.

Posted in Editorial | Tagged , | Leave a comment

The Secure Medical Data Challenge

Securing our databases and preventing the release of information our organizations have collected is an ongoing job for many of us. We patch our systems, ensure that our logins and users are granted just a few rights, encrypt data and backups, perhaps have various monitoring solutions, and certainly deal with plenty of stress at times. We work with network staff to ensure firewalls protect our systems. It may be a regular part of our job to actually argue with others that security is important for databases. If we work with regulated data, such as financial or medical information, then we may even have the struggles of compliance with auditors or even regulators.

All of our work can be for naught with a few simple mistakes that someone in our organization might make. Perhaps an employee takes a copy of production data on a laptop home and loses it. An employee might use the same password for a secure portal that they do for Facebook or some other social site, getting hacked and exposing our systems to others. We might even have an employee click on a phishing email or insert a random USB drive into their laptop and compromise entire infrastructures. As a security professional once noted, we have to win every time. The bad guys only have to win once.

It can be distressing, and even more so this week as I read this piece about medical data and Frank Abagnale, the inspiration for Catch Me If You Can. In it, Mr. Abagnale states that he doesn't think technology will ever defeat social engineering, which is distressing to me. He may be right, though I certainly hope that machine learning and other technologies, along with lots and lots of data, will find ways to catch abnormal queries and data extraction, which are often a signal that data loss may be underway.
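The kind of detection I'm hoping for can be sketched in a few lines. This is a toy illustration with made-up numbers, not any vendor's product: flag a day's read volume that sits far outside an account's historical baseline.

```python
from statistics import mean, stdev

# Hypothetical daily row counts read by one account; the last value
# looks like a bulk extraction rather than normal activity.
daily_rows_read = [1200, 950, 1100, 1300, 1050, 980, 250000]

baseline = daily_rows_read[:-1]
mu, sigma = mean(baseline), stdev(baseline)
latest = daily_rows_read[-1]

# Flag anything more than three standard deviations above the baseline mean.
if latest > mu + 3 * sigma:
    print(f"ALERT: {latest} rows read; baseline mean is about {mu:.0f}")
```

A real system would need rolling windows, per-user baselines, and far better statistics, but the principle is the same: the data itself tells you when behavior changes.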

What’s more distressing in the piece for me is the fact that some of this data, like birthdays and SSNs, is stored for years. Unlike credit card data, which is most valuable right away, unchangeable data becomes more valuable over time, so it pays to keep it around. Maybe the most distressing item is someone using your identity to get services, which then get billed to you. How can you prove that you didn’t consume the services? I think that can be difficult, especially as we use more and more digital information that doesn’t necessarily tie directly to a particular person at a point in time. This might be especially true as we store digital pictures of signatures, which are often a poor imitation of what a person’s actual handwriting looks like. I shudder to think of those being used in a court of law.

Perhaps even more disconcerting is the idea that children’s information is being taken at early ages, perhaps being sold decades later. I have no idea what to do here, or what I’d want others to do. These are going to be data problems we deal with for a long time, and many of us may end up collecting incorrect data in our organizations, thinking it’s correct. I don’t know many companies that have good processes for correcting data that’s incorrect when it’s received. Far too often we assume the data is correct, and only worry about ensuring the bits in a file are transformed correctly to the same bits in a database.

Security is an ongoing problem, with no easy solution. There is one thing I’m sure of: we, as data professionals, are going to be the ones frustrated by many of our efforts at security being thwarted by someone we work with.

Steve Jones



The Multilingual Programmer

At the recent SQL Konferenz in Germany, the keynote was from Michael Rys of Microsoft. His talk was on the evolution and design of the U-SQL language. If you haven’t looked at it, U-SQL is what the Azure Data Lake (ADL) uses, and it’s designed to improve your ability to query various data sources in the ADL. If you want to know more and begin working with U-SQL, we have a stairway you can go through.

Michael opened his talk by looking at the languages he’d learned in his career. He started with APL and moved on from there. He asked if anyone had used APL, and there were few of us. It was my second language at University, and one I didn’t enjoy. The nature of the language was unintuitive to me, and I was glad I only suffered for a few months. If you’d like to try it, you can at TryAPL.

I thought this would make a fun discussion, so I wanted to ask: what languages did you learn for programming and in what order?

For me, I started with BASIC, and a little assembler on early systems. I moved to Pascal in high school, trying to develop fun games and computer-assisted homework help for myself. In University, I began with LISP, which caused plenty of people to drop out of computing. I’m not sure if starting there was a good idea, but I enjoyed it. From there, I went to APL, Assembler, Fortran, and C before switching away from computers for a bit. When I returned, C++ was all the rage, and I soon found jobs that paid me to write FoxPro/Clipper code, then VB, then a touch of Java before the web became popular and I worked in ASP and ASP.NET. Along the way SQL became more and more of my career, and I’m glad it did.

These days I’m trying to improve my C#, PowerShell, and Python skills, more for fun than anything else, though all three are useful in data work. I haven’t done much with R, but I have my fingers crossed that the sp_execute_external_script call that allows a parameter of @language=N'Python' gets added to SQL Server before I need to learn any R. After all, most of the R libraries have Python equivalents, and I find the language much more intuitive.

Let us know today what your journey has been, and if you haven’t been a developer, maybe it’s time to learn some programming skills. After all, I think that’s important for a DBA.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (5.0 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Upgrading SQL Server on Linux

I saw this week that there is a new CTP (v1.3) of SQL Server v.Next. I haven’t had a lot of time to work on the Linux version lately, but I thought I’d try it and see how well the upgrade went.

There’s an install and upgrade page at Microsoft you can use, but on Ubuntu, things are easy. First, connect to your system and run this:

sudo apt-get update

That will download updated packages and get the system ready. You can see that I have a lot of stuff to update on this particular system.

[Screenshot: apt-get update output in the Ubuntu VM]

Once this completes, you just run

sudo apt-get install mssql-server

This will actually perform the install. That takes a minute, and in my case, I walked away and let it run. When it finished, I tried to connect from a local machine, but got an error, so I ran this:

systemctl status mssql-server

This gives the status of the service, which showed that it had stopped.

[Screenshot: systemctl status showing the mssql-server service stopped]

OK, no problem. This starts the service.

systemctl start mssql-server

Once this completed, I could connect.

[Screenshot: a successful connection to the upgraded instance]

I’ve done this a few times over the last year, but not since CTP 1.0, so I reminded myself of the process.

So far in my testing, almost everything I’ve done with the core database engine, all scripts, etc., seems to work. More and more work is being done, and I’m interested to see how this version progresses.

If you like Linux, maybe you want to give this a try.


The Cloud is Just a Tool

The cloud is a term that’s full of hype. We hear it from various media outlets all the time: the cloud is the answer, the cloud is cheaper, the cloud is the way of the future, the cloud handles your DR, the cloud manages availability, and more. Microsoft has been pushing the message of “cloud-first” (and mobile-first), which has many SQL Server professionals confused, concerned, or even angry. There are also plenty of professionals that dismiss the idea of cloud anything when it comes to data.

I’ve felt similar emotions, and certainly I have been skeptical of the cloud versions of databases. I remember the first cloud service, a key-value store, which seemed woefully inadequate for most purposes. Since then, I’ve seen Azure SQL Database grow, and many other products get released. Across that time, I’ve become more and more impressed with what Microsoft has done, and as Visual Studio Team Services has expanded, I’ve come to really embrace and get excited by the cloud. It’s still not something I’d always recommend, but it’s where I would always start.

Mike Walsh wrote a great blog post on the move to the cloud, which I recommend you read. The end message that I get from Mike’s thoughts is that the cloud is a tool, and it can be a tool that really enables you to solve issues without getting caught up in the details of implementing every little part of the system. That’s a mantra that I think many of us embrace, even if we don’t really realize it. How many of you deal with SQL hardware? How many of you install or configure Windows? For many of you, do you even worry about backups, or do you have scripts/tools/products that just start backing up new databases? I used to do all those things, but I haven’t even seen a production database server with my own eyes in a decade, despite connecting to many.

We all move at different paces. Some of us still deal with SQL Server 2008, 2005, 2000, or even earlier versions. Some of us will need to manage those platforms for years to come, even as we may end up helping build applications on Azure SQL Database and deal with data integrity, quality, and security issues through a remote connection. I’d like to be even more hands off. Enabling TDE in Azure is clicking a button. I wish it were that simple on premises (whether really here or in an IaaS scenario), because it should be. I should be able to click a button, get prompted to confirm, pick a backup location for my cert backup, maybe give the cert a name, and it should just get completed.

The cloud really is a set of tools and services that take away some of the details and drudgery. Sometimes that’s fantastic, and it enables more rapid, more scalable deployment of resources. Sometimes it’s dangerous because the vendors haven’t really thought through the process completely. I really think that’s where we add value as professionals. We shouldn’t be doing too many tasks that can be more easily automated. We should understand what the automation does, and be able to examine it, but we should be spending our time examining problems and evaluating solutions. We should be using tools, of which the cloud is just one, to ensure our organizations become more productive and more efficient over time.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.5 MB) podcast or subscribe to the feed at iTunes and Libsyn.


What’s a DACPAC and a BACPAC?

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

If you’ve worked with SQL Server development and database projects, you might have heard about DACPACs. If you haven’t, it’s a concept that didn’t seem to catch on with many companies. I’m not a fan of the format, but it works, and you should be aware of what a DACPAC is and how it can be used.

The DAC part of the moniker is short for Data-tier Application. This is the container that includes all of the definitions for the objects contained inside the DACPAC. The PAC part is just an easy way to note that this is packaged in a compressed format.

In fact, a .DACPAC is a zip file. If I rename one of them, I can open it like any other zip file. Here’s one I’ve added a .zip to the end of and opened in Windows Explorer. There are a few files in here.

[Screenshot: the renamed PartsUnlimited.dacpac.zip opened in Windows Explorer]

The only really important one is model.xml, which is a model of my objects. If I look inside, it’s a cumbersome XML format, but I can easily see my Order table as part of the file.
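You don't even need Explorer for this. Here's a minimal sketch using Python's standard zipfile module; since I can't ship a real DACPAC in a blog post, the file contents below are invented stand-ins, but the container handling is exactly the same.

```python
import os
import tempfile
import zipfile

# Build a mock .dacpac; a real one produced by SQLPackage.exe contains
# more files, but the container format is plain zip either way.
path = os.path.join(tempfile.mkdtemp(), "PartsUnlimited.dacpac")
with zipfile.ZipFile(path, "w") as z:
    z.writestr("model.xml", '<DataSchemaModel><Table Name="Order"/></DataSchemaModel>')
    z.writestr("Origin.xml", "<DacOrigin/>")

# No rename needed in code: zipfile ignores the extension.
with zipfile.ZipFile(path) as z:
    names = z.namelist()
    model = z.read("model.xml").decode()

print(names)
```

The rename-to-.zip trick is just for Explorer's benefit; any zip-aware tool will open the file as-is.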

[Screenshot: the Order table definition inside model.xml]

These are useful files for having a machine read the format and reproduce database objects in a live database. SQLPackage.exe will do this, as will other tools built on the DacFx (Data-tier Application Framework).
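To make that idea concrete, here's a toy sketch of what such a tool does: read the model and turn it back into DDL. The XML shape here is invented for illustration; the real model.xml schema is far more involved.

```python
import xml.etree.ElementTree as ET

# An invented, radically simplified model; real model.xml is much richer.
model_xml = """
<DataSchemaModel>
  <Table Name="Order">
    <Column Name="OrderId" Type="int"/>
    <Column Name="Total" Type="decimal(10,2)"/>
  </Table>
</DataSchemaModel>
"""

root = ET.fromstring(model_xml)
for table in root.iter("Table"):
    cols = ", ".join(f'{c.get("Name")} {c.get("Type")}' for c in table.iter("Column"))
    ddl = f'CREATE TABLE {table.get("Name")} ({cols});'
    print(ddl)
```

The real tools also diff the model against a live database and generate only the changes needed, which is where most of the complexity (and most of the version-to-version limitations) lives.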

I don’t love the format, but it is machine readable and can allow you to package and deploy database changes. There are limitations, especially between versions, and I think it’s harder to understand than the formats that SQL Compare (from my company, Redgate Software) uses, but that’s me. I’m biased, but I don’t love DACPACs.

In any case, you can right click and “Unpack” this, or use SSMS to create and read them into a database. In the next post, I’ll show how that works.

What’s a BACPAC?

That’s easy. It’s a DACPAC with the data included.


Heading Back to SQL Bits

I’m honored to be selected to speak at SQL Bits 2017 this April in Telford. This is my favorite SQL Server conference, and it’s always a joy to attend. Especially the Friday night party, where everyone seems to have a lot of fun.

I’ll be presenting “Including Your Database in a DevOps CI/CD Process” where I’ll look at the ways in which database code can be included alongside application code in a CI/CD process, as well as the challenges of doing so. I’ll demo the smooth flow of changes, along with the recovery when things go wrong.

I’ll use some tools, but I’ll mostly be working in Visual Studio.

Join us for a fantastic conference in the UK this spring. Register today and save (until Mar 4). Come to a training day and learn something. I might just see you there.


Lots of Learning at SQL Bits

This year SQLBits is returning to Telford, UK, on April 5-8, 2017. I’ll be there, presenting on Friday, and enjoying the show the rest of the time. If you haven’t ever been to the event, it’s a fantastic, fun, casual event with attendees from all over the world coming to learn, teach, and get excited about SQL Server. The event isn’t the largest SQL Server event, but it’s got the best atmosphere and doesn’t have all the hassles of some other events.

I’ve attended most of the SQL Server conferences in the world, and if I had to choose only one to go to, it would be SQL Bits. The others are good, but SQLBits is my favorite. I’ve been many times, and I’ve watched the event grow over the years. With the venue moving from year to year, it’s also a chance to experience different venues and locations in the UK.

One of the neat things about SQLBits is that there is a mix of different training on different days. The event started with one day of pre-con learning, a paid training day on Friday with more technical sessions, and a free day on Saturday. This has grown to two full days of pre-con training, and if you’re looking for a good deal on learning a new technology, you should come spend a day on Wednesday, April 5, or Thursday, April 6, with one of the world-class instructors.

I don’t get much of a chance to attend classes, but since I’ll be there, I’m hoping to sit in on a class each of these days. There are many to choose from, and fortunately I’ve seen a few, so my choice isn’t as hard as yours. Whether you want to learn HA, Power BI, T-SQL, Text Mining with R, or more, I’m sure you’ll find one or two days’ worth of valuable training. In fact, if you’re going to make the journey to Telford, you should spend both days in class. Whether it’s directly useful in your job right now or it’s something that interests you, I bet you’d find two days of intense training beneficial.

If you make the decision to come soon, you’ll save a bit of money if you register now. The full conference registration will go up on Mar 4, so push your boss to send you today. I’ll be there, and I hope to see a few of you there as well. Be sure to say hi to me if you make the journey to SQLBits in April.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.



DevOps Webinar Tomorrow

A quick reminder that tomorrow, Feb 21 at 12pm EST, I’ll be hosting another DevOps, Database Lifecycle Management (DLM) webinar. Together with Arneh Eskandari, we’ll show how we can each make changes to our own database, push the changes to git and reconcile merge issues.

Register now, and watch us work together to perform distributed database development.
