When I started working with SQL Server, versions came at a strange pace. Three years between v4.2 and v6.0, but less than a year before v6.5. Then two years to v7, one more to SQL Server 2000, but five until SQL Server 2005. After SQL Server 2008, we’ve moved to a 2-3 year cycle, but I can see that accelerating a bit more, as SQL Server vNext looks like it might come in 2017, a year after SQL Server 2016. Since much of the code is released to Azure first and exercised there, I could see on-premises releases coming every 1-2 years in the future.
Overall, I like this pace. We see changes coming to the product on a regular basis, with enhancements we can choose to implement or not. We no longer have to wait years for an enhancement or the evolution of a feature. We always have the choice to upgrade. Many of us won’t, but that’s OK. Perhaps we’ll use new features in new applications and learn whether they make the case, or not, for upgrading existing instances.
There’s a flip side to the rapid releases and enhancements to the platform. Many of our organizations expect a database server to last for years. We invest plenty of resources in building these database servers, and I think many managers and organizations expect they will be in service for at least five, and maybe ten, years or more before they are decommissioned or perhaps upgraded. I know there are still plenty of SQL Server 2005 and 2008 instances (probably not 2000) that are being used on a daily basis with no plans to replace them.
This means that as the developers and DBAs who work with SQL Server, we will likely be supporting a wider range of versions than ever before. Even if you started a company two years ago with a SQL Server 2014 instance, I’d guess you might have a few more now, perhaps some SQL Server 2016 instances. In two years you might have 2014, 2016, and 2017 (assuming vNext becomes SQL Server 2017). In five years, perhaps you’ll have five versions to support. I know there are people in that situation today with 2014/2012/2008R2/2008/2005 instances in production.
There was an announcement this week from Microsoft. They are now offering SQL Server Premium Assurance, which will give you 6 years of support after the 10 you can now get with Extended Support. Certainly there are restrictions and costs, but I expect there will be some large organizations that think it’s worth paying this to continue using older versions as long as possible. After all, this is a platform, which implies some level of stability.
That doesn’t solve the knowledge issue and the need to keep staff around who understand the platform, but with all the education and knowledge available from places like SQLServerCentral, perhaps supporting multiple versions isn’t a big deal.
Had a quick interview at Microsoft on Channel 9 last month, and it’s now live. Enjoy.
One of the things people have asked for over many years is an easy way to copy databases. SSIS has the Copy Database Task, but that has been problematic over time. As a result, we’re left with taking a backup of a database, copying it to a new instance, and restoring it, which is easy but cumbersome. Or using the detach/copy/attach method.
dbatools gives us a nice, easy PoSh command to perform this task: Copy-SqlDatabase. I ran a quick test recently to see how this works. Using the -WhatIf option, I tried to copy a database from one instance to another on my main computer.
My first attempt didn’t do well.
I was thinking there would be a default method, but there isn’t. You must specify using the backup/restore method or the detach/attach method. I decided to try the detach/attach method.
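The call I tried looked roughly like this (a sketch: the instance and database names are from my machines, and the parameter names assume the 2016-era dbatools syntax, which may have changed since):

```powershell
# Dry run of a detach/attach copy; -WhatIf reports what would happen
Copy-SqlDatabase -Source .\SQL2016 -Destination .\SQL2016_QA `
    -Databases DBAAdmin -DetachAttach -WhatIf
```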
As you can see, this looks to detach and then reattach the database, so let’s try it. First, check my QA instance. There is no DBAAdmin database.
This is a small database with just a few procs and tables in it. When I remove the -WhatIf option, it works.
I saw the green PoSh flash in the middle as the file was copied from one location to the next using the BITS method. When I looked in my SQL2016_qa instance, I saw the database and objects.
All in all, this took about two minutes for me to move this database. Certainly that’s about the time a backup would take, but I did it with one command, and it felt much, much easier.
There was a downside for me. I need to double check this, but I didn’t get the database re-attached at the source, and I had to do that myself. The files were there, just not attached. Perhaps it’s just me, so I’ll retest when I get the chance, but it seems to be a minor bug.
I also tried the backup/restore option here. I had to provide a file share, for which I chose my local D: drive. Note that this has to be provided or a popup will appear, and it has to be a UNC path (\\server\share), not d:
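The backup/restore form looks something like this (again a sketch: instance names are from my machines, the share is a placeholder, and the parameter names assume the 2016-era dbatools syntax):

```powershell
# Backup/restore copy; -NetworkShare must be a UNC path both instances can reach
Copy-SqlDatabase -Source .\SQL2016 -Destination .\SQL2016_QA `
    -Databases DBAAdmin -BackupRestore -NetworkShare \\mydesktop\d$
```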
This worked, with the new database appearing on my second instance.
The backup was also removed from my drive:
Overall, this is handy, and I’ll likely use this as my default method for moving databases between instances.
When we want to know if our databases are running well, most of us look at metrics for our servers. Usually some combination of CPU, RAM, and disk IO is chosen, though network, user connections, batches, and transactions can be a part of tracking performance. I would guess that if I asked any of you responsible for performance what might be a good measurement for your most important database server, you’d have some idea of what CPU, RAM, or something else should be running at. I know I can easily check on SQLServerCentral.com at monitor.red-gate.com, and I usually expect to see CPU in the 20-25% range. If it’s substantially higher, I know something is going on.
I was listening to one of the DevOps Cafe podcasts recently where the subject was monitoring your infrastructure and how well it was performing, and one of the hosts mentioned listening to Werner Vogels from Amazon years ago. Apparently, despite all the technical measurements available for Amazon’s IT infrastructure, the one measure they most often used to track how well everything was working was the order rate. Given their size and experience, Amazon could look at the order rate and decide if their systems, both hardware and software, were performing well.
Most of us might not be as tightly focused on our business, but I’m wondering if any of you use business-based metrics when examining the performance of your hardware and software. Is there a metric for your business that might help you determine if your hardware is actually performing as expected? Or if your code does what the business needs in an application?
At SQLServerCentral, we have various ways of analyzing our systems, and we have added some business-based metrics. One of those is forum posts per hour. We should see a similar curve throughout the week showing how many posts we’re getting. Across years of data, we have a fairly steady rate of posts on a regular basis. If we’re not getting a steady set of posts, we know something might be wrong.
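As a sketch of what a check like that can look like, a simple approach is to compare the current posts/hour against the historical values for the same slot and flag large deviations (the function name, numbers, and threshold here are invented for illustration):

```python
# Hypothetical sketch: flag when the current posts/hour falls far outside
# the historical range for this hour-of-week slot.
from statistics import mean, stdev

def is_anomalous(history, current, sigmas=3.0):
    """history: past posts/hour samples for this hour-of-week slot."""
    mu, sd = mean(history), stdev(history)
    return abs(current - mu) > sigmas * sd

history = [118, 125, 130, 122, 127, 121, 124]  # made-up past samples
print(is_anomalous(history, 123))  # within the normal band
print(is_anomalous(history, 12))   # far below the band: worth investigating
```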
Perhaps you don’t have any business-based metrics that you use, but are there any that might make sense for your industry? Let us know today. And if you think there isn’t a good way to measure your hardware and software with business metrics, I’d be interested in your explanation as well.
The Voice of the DBA Podcast
Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.
This is a quick post on checking the port for your SQL Server instance. It’s in line with Tim Ford’s request for simple, beginning blog posts this year.
How do you know what port your instance is listening on? Many of you might not know, relying on the SQL Browser service to be running. That’s not necessarily a good idea, as some security protocols insist this be disabled.
There are ways to check. First, you could use the Configuration Manager. On my desktop, this is in the Manage tool for the computer. Once inside, I can select the Server Network Utility.
Notice that I have multiple instances here, so I need to choose one. Once I do, I see the protocols on the right. In this case, I want to look at the properties of TCP/IP, which is where I’ll get the port.
If I look at the properties, I’ll start on the Protocol tab, but I want to switch to the IP Addresses tab. In here, I’ll see an entry for each of the IPs my instance is listening on. I can see which ones are active, as well as the port. In my case, I have these set to dynamic ports.
This doesn’t help. If I had specific ports, I’d see them listed for each IP.
The way I tend to check the specific port is to go to the SQL Server error log.
When I open this up, I see my error log entries. I want to look at the startup of my current session and scroll up slightly. At some point you will see an entry like “Server is listening on…”
In my case, you can see above that I am listening for this instance on 60087. I can verify this with a connection in SSMS.
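Another way, if you already have a connection, is to ask the instance directly. This DMV query (it requires the VIEW SERVER STATE permission) returns the TCP port your current session came in on:

```sql
-- The port the current connection is using
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```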
Note that this is a named instance, and I normally connect to “.\SQL2016”. The SQL Browser determines the port: I send the name in and get a port back, to which I then connect. However, here the name doesn’t matter. I just send in the port and I can connect.
Everyone working with SQL Server should know this. It’s a simple item, like checking the printer is plugged in. Know how to check this.
And blog about it.
Interviewing someone is hard. In fact, I’m not sure many of us really know how to do it well. Looking at the past hires in most companies I’ve known, I find that all too often the success of hires in developing software is a bit of a hit and miss effort. Some hires do well, and some don’t quite perform as expected. Why is that? Wouldn’t you expect that we could adequately test someone’s skills as a developer? After all, we know what kinds of code we expect people to write. Shouldn’t there be some sort of test that you could have someone complete, even across a few hours, that would allow them to show what they can produce?
I ran across an interesting post from Oren Eini that looks at a way of interviewing people by asking them to improve code. Rather than a take-home test, or having someone develop code from scratch, Oren gives them a file and some code, then asks them to make it faster. The interviewee has around an hour (mentioned in the comments), but this is a test of whether or not someone understands how to read and write code well.
Perhaps there’s a good way to do this in the SQL world as well. Can we take a loop or a complex join and have a user rewrite a query to be more efficient? Some of the changes in T-SQL in recent versions (especially 2012) can dramatically change the way you write code. Perhaps a candidate should be tested to see if they actually know how to avoid Grant’s seven sins? I bet more than a few people would want to know if candidates would remove, or at least question, the use of NOLOCK.
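As a sketch of what such an exercise might look like, a candidate could be handed a row-by-row loop (the table and column names here are invented) and asked for the set-based rewrite:

```sql
-- Candidate is given this row-by-row loop, which raises every order total by 10%...
DECLARE @id INT = 1, @max INT;
SELECT @max = MAX(OrderID) FROM dbo.Orders;
WHILE @id <= @max
BEGIN
    UPDATE dbo.Orders SET Total = Total * 1.1 WHERE OrderID = @id;
    SET @id = @id + 1;
END

-- ...and a strong answer replaces it with a single set-based statement:
UPDATE dbo.Orders SET Total = Total * 1.1;
```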
There are many reasons why a candidate might interview well and then not perform as expected on a day-to-day basis. Life changes; we have outside distractions that might affect us at work. During the workday, we may struggle to get along with co-workers. Our managers might not bring out the best, or even the good, in us. Perhaps we are asked to perform tasks that weren’t covered in an interview and are outside our area of expertise. Perhaps we just don’t try as hard after we’ve achieved our goal of getting the job.
There isn’t going to be any magic, guaranteed way of ensuring we hire people that will always perform up to their abilities. That doesn’t mean we should give up. I would really like to see us continue to try new techniques, share ideas, and most of all, continue to inspire and motivate others to learn more about their craft and constantly improve their skills.
By the way, if you’re interested in the code side of things, Oren discusses some basic improvements and then more efficiency changes. One interesting thing: moving away from Linq dramatically lowered the memory allocations and working set size. Is this a big deal? It really depends on the way your application is structured, but this is an optimization that might be worth doing early and often, with a little developer training on how to better write queries.
Next week is the 2016 version of SQL in the City and you can register now to attend. It’s free and doesn’t require any travel. You can watch the event live from the comfort of your office chair.
In the many years that we’ve run SQL in the City, we’ve had events in London and various cities around the US. It’s been a lot of fun and we want to continue the tradition, but with an experiment.
This year we’re broadcasting SQL in the City from the Redgate Software offices in Cambridge, UK. We have seven hours of sessions mixed in with tips and tricks that will help you become better SQL Server developers and DBAs while showing you some new ways you might use the SQL Toolbelt to work more efficiently. Even if you’re not a Redgate Software customer, I’m sure you’ll learn some things about SQL Server.
In order to make things more convenient for everyone around the globe, we are broadcasting on two days, December 14 and 15. The content is the same both days, but the times vary. Wednesday will be an early day for me in Cambridge as we broadcast for Asia and Eastern Europe. Thursday will be a late night, as the conference is scheduled to fit the Western Europe and US time zones.
I got a link recently from Microsoft on building apps easily for SQL Server. At the top of the page, they ask you to pick a language and OS. Since I’ve done a little Python and I use Windows, I chose those options.
The page walks you through a simple app. I decided to try this out and see if it works, following the instructions. Since I already had SQL 2016 installed (and newly upgraded to SP1), I just connected and verified it was there.
I didn’t want to go back to Python 2.7, and I’d rebuilt this machine for testing, so I used Chocolatey to quickly get Python 3.5. I’ll have to adjust code to match this version.
Once Python was installed, I needed to get virtualenv and then create a folder. I used my regular git folder since it’s really, really simple to get a project with version control running. Unfortunately, when I went to get pyodbc, I had an error.
This is going to use C++ tools to build the pyodbc files, so I need to get those installed. I can do that from Visual Studio.
Once I run the installers, I can get pyodbc. There is a build error, but I seem to have the module. My next step is to create my database. I also create a login that is db_owner for this.
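The setup amounts to something like this (the database name, login, and password are placeholders, not the tutorial’s actual names):

```sql
-- Create the sample database, a SQL login, and map it in as db_owner
CREATE DATABASE SampleDB;
GO
CREATE LOGIN pythonapp WITH PASSWORD = 'Str0ng_Pa$$word';
GO
USE SampleDB;
GO
CREATE USER pythonapp FOR LOGIN pythonapp;
ALTER ROLE db_owner ADD MEMBER pythonapp;
GO
```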
Now it’s time for the code. That’s easy enough, so I copy the code into VSCode and change the instance and user information.
I also had to wrap the print statements’ quoted items in parentheses, since print is a function in Python 3.x. Once that’s done, I start the program and it appears to work.
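The adjusted script amounts to something like this sketch (the driver, server, database, and login details are placeholders from my setup; pyodbc must be installed and a SQL Server ODBC driver present):

```python
import pyodbc  # requires the pyodbc package and a SQL Server ODBC driver

# Connection details are placeholders; use your own instance, db, and login
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 13 for SQL Server};'
    'SERVER=localhost\\SQL2016;DATABASE=SampleDB;'
    'UID=pythonapp;PWD=Str0ng_Pa$$word')
cursor = conn.cursor()

cursor.execute("SELECT @@VERSION")
row = cursor.fetchone()

# Python 3 requires print as a function, hence the parentheses
print("Connected, server version:", row[0])
```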
If I check from SSMS, sure enough the program has worked.
The second part of the tutorial uses the Django web framework to set up a simple site. Once again, I got a pyodbc error, and nothing worked. Fortunately, I found that the issue is with Python 3.5 not having the proper binaries in the default PyPI directory. I resolved it with this link.
However, there were other errors, which I suspect are related to Python 2.7 v Python 3.5. Rather than solve those, I went on to the columnstore demo. In this, you create a table with 5 million rows and then run a query against it from Python. I did that, then created the columnstore index, then ran it again. The results are below.
Note: I had to create the variable, tsql, and used this line of code:
tsql = "SELECT SUM(Price) FROM Table_with_5M_rows"
Despite some issues, which are probably my fault, this is a good introduction to how you might use Python with SQL Server. Since we now have SQL Server on Linux coming, and there are lots of Python developers, this might be a good place for some of you to learn a bit about connectivity with Python if you ever need to troubleshoot that kind of setup.
I work on a few projects and have a bunch of demos set up, but I don’t work on them all every week. Being only semi-creative, like many of you, I end up with some similar projects with similar names. When I wanted to determine which project was linked to which remote repository, I didn’t see an easy way.
I perused help from the command line, but didn’t see anything, so I did a quick Google search. From Stack Overflow, I found this quick command.
git remote -v
When I ran this, I easily could get the address of my remote:
Running this in each of the repos showed me the one I needed.
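To see the output format without touching a real project, you can try it in a throwaway repo (the URL here is made up); git prints the fetch and push URL for each remote:

```shell
# Set up a throwaway repo with one remote, then list the remote URLs
cd "$(mktemp -d)"
git init -q
git remote add origin https://github.com/example/demo.git
git remote -v
```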
I’m sure plenty of experienced people might know this, or you might always use something like SourceTree or GitKraken, but if you’re working in the command line more (as I often am), you might find this handy.
Hopefully I’ll also remember this now.
More dbatools experimentation for me. This is a great set of PowerShell cmdlets that solve simple, handy problems. In this post, I want to look at Copy-SqlJob, which allows me to easily move jobs from one instance to another.
The Job Subsystem
Jobs in SQL Server are part of the SQL Agent subsystem, which means they’re a part of SQL Server, but not really. Maybe the most complex part of working with jobs is getting the code for a job. You either need to use an SMO interface and script out the job, or you can decode the tables in msdb. Neither of those is handy.
We do need to move jobs, though. Perhaps the most common place I need to copy jobs around is a DR (disaster recovery) situation. Clustering will move jobs from node to node as it covers the instance, but the other technologies (AGs, Log Shipping, Mirroring) only work with databases. So items like jobs need to be manually moved.
That’s not good.
Over time we’ll edit, add, and change jobs, but I’ve never seen anyone do a great job of ensuring the DR systems are up to date with jobs, logins, etc. all the time. There’s too much of a manual process.
That can change with Copy-SqlJob. I can now not only copy particular jobs between instances, I can use the -Force parameter to ensure the copy takes place. This means I can easily ensure the most recent version of my job is copied over. Or I can just skip this, and if the job exists, it’s not copied.
Running the Cmdlet
It’s easy to use this cmdlet. It can use Windows Auth or SQL Auth; it’s up to you. The documentation leaves something to be desired, so you’ll need to use PowerShell’s Get-Help to learn what parameters exist.
For a simple copy, I’ll move all jobs from one instance to another. Here are the jobs on my primary SQL Server 2016 instance.
These are administrative, backup jobs and index jobs from Ola Hallengren. I don’t have these on my QA instance, which isn’t good.
Suppose this is a new instance and I want to add them? I can use Copy-SqlJob, but I want to know what will happen. I’ll use the -Source and -Destination parameters and then include a -WhatIf. This gives me a list of jobs that will be copied, and those that won’t be.
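The dry run looks roughly like this (instance names are from my machines; the parameter names assume the 2016-era dbatools syntax):

```powershell
# Preview which jobs would be copied, without changing anything
Copy-SqlJob -Source .\SQL2016 -Destination .\SQL2016_QA -WhatIf
```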
If I use the -Force parameter, you’ll see that all jobs are copied.
Well, not copied, but they will be.
I could limit this to specific jobs. For example, let me copy over just these jobs:
- DatabaseBackup - SYSTEM_DATABASES - FULL
- DatabaseBackup - USER_DATABASES - FULL
When I run the command with the -Jobs parameter, I see just these jobs moved:
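That command looks something like this (instance names are from my machines; the parameter names assume the 2016-era dbatools syntax):

```powershell
# Preview copying only the two full backup jobs
Copy-SqlJob -Source .\SQL2016 -Destination .\SQL2016_QA `
    -Jobs 'DatabaseBackup - SYSTEM_DATABASES - FULL',
          'DatabaseBackup - USER_DATABASES - FULL' -WhatIf
```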
Let’s actually do that. Once I remove the -WhatIf and run the command, I see this:
Note that this doesn’t actually move dependent objects. For instance, the first job uses this procedure: [dbo].[DatabaseBackup]
I’d have to make sure this is copied separately for the job to actually run and succeed. Right now it will fail with an error that an object can’t be found. However, if you have jobs that you are using in a DR situation, this ensures the job itself is moved over.
Rather than copy the syspolicy_purge_history job, I’ll exclude it. I can do that with the -Exclude parameter. Note that the two jobs I’ve already copied are shown as warnings.
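That call is roughly (instance names are from my machines; the syntax assumes the 2016-era dbatools version):

```powershell
# Copy all jobs except the built-in policy purge job
Copy-SqlJob -Source .\SQL2016 -Destination .\SQL2016_QA -Exclude syspolicy_purge_history
```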
All the parts of the job itself are copied. The description, the owner, the comments, the schedule, the steps. Everything is copied, which makes this perfect for ensuring that when you deploy a job, it can be deployed to all of the places you need it deployed by copying it from the source.
Of course, you’ll also want to ensure you use some method to disable the jobs on the DR instances (along with re-enabling them on failover).