Fitting Into RAM

RAM has always been a fairly limited resource in most of the computer systems I’ve worked with in my career. There never seems to be enough, and I’d always like more, usually to speed things up. That has somewhat changed with laptops, as 16 GB really works well for me most of the time. Not that I wouldn’t take a 32 GB machine, but I’m waiting for them to become more common and smaller.

This has especially been true for database servers. It seems that I’ve rarely had a database server that could fit my entire database in RAM. Even now, I have an over-provisioned server for SQLServerCentral with plenty of spare capacity, yet I’m still slightly short on RAM. The target memory level for SQL Server is about a GB more than I have set. Not really worth complaining about, but I still don’t have the RAM I’d like.

Last week I wrote about someone who attacked the RDBMS as old and troublesome technology. As part of that, a method of storing all data in memory was presented. I’m not sure this is actually a good or practical idea for most systems, but it did get me wondering about data space and size. Certainly I have seen databases with plenty of index space, and at times there is more index data than other data, but I suspect that’s not the case for many databases.

Regardless, I was curious whether anyone has databases that couldn’t fit into RAM these days. If you think about the largest database you have, how big is it in terms of data size? Not allocated size, but the total data space used. Would it fit into RAM if you could get 1 TB or 2 TB of memory? If it would, what about the index sizes, are they large? There are a few scripts in this thread if you need one, and a rough sketch below.
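
If you don’t want to dig through the thread, here is a minimal sketch of the kind of query I mean, splitting used space between data (heaps and clustered indexes) and nonclustered indexes for the current database. Treat it as a starting point rather than a polished script; a plain EXEC sp_spaceused gives a similar summary with less typing.

-- Rough split of used space between data and nonclustered indexes
-- for the current database (heaps and clustered indexes count as data)
SELECT
    SUM(CASE WHEN i.index_id IN (0, 1) THEN ps.used_page_count ELSE 0 END) * 8 / 1024 AS data_used_mb,
    SUM(CASE WHEN i.index_id > 1 THEN ps.used_page_count ELSE 0 END) * 8 / 1024 AS index_used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id;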

I suspect there are certainly databases that don’t fit into RAM, and likely plenty of instances with more than one database that don’t have enough. I still see plenty of people with less than 64 GB on their servers, so that’s a battle still being fought. I certainly wouldn’t advocate an in-memory-only database, since there are going to be other issues, but it’s still an interesting thought. Certainly my server has only 60 GB allocated and the databases are well over that in aggregate.

Maybe asking for a bit more RAM on those critical servers is the way to go, especially if you think you can get the entire database into memory.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Moving Objects to a New Schema

Another simple post from me that hopefully serves as an example for people trying to start blogging as #SQLNewBloggers.

I haven’t had the need to move an object from one schema to another in years. Really since SQL Server 2000. I wrote about deleting a user that owns a schema recently, but that’s often a first step. The next thing I might need to do is actually move objects from that schema to a new one.

I ran across this command when I was looking at how to move the schema to a new user. There’s a parameter for ALTER SCHEMA that will move objects. This is the TRANSFER argument, and it works like this.

I need a new schema for the object. In this case, I’ve got a table called SallyDev.Class. I want to move it to a new schema, and I’ll choose dbo for this example. I’ve often had developers build in their own schema and then transferred the objects to the dbo schema, which is almost like a merge of code from one branch (SallyDev) to another (dbo).

The format of the command is: ALTER SCHEMA <newschema> TRANSFER <object>

The new schema name is just the name, with brackets if needed. Hint: if you need brackets, rename your schema, please.

The object is the qualified name of the object, with the old schema. In this case, the command I’ll use is:

ALTER SCHEMA dbo TRANSFER SallyDev.Class

Here’s my before look:

[Screenshot: SSMS query results before the transfer]

When I run the code, it works:

[Screenshot: the ALTER SCHEMA command completing successfully]

Now my object is moved. Success!

[Screenshot: SSMS query results after the transfer]

SQLNewBlogger

This is a quick look at a specific skill that can be handy. I won’t use this often, but if my team worked in this flow, or we had an issue, this not only shows how to resolve a single object move, it also helps me remember the command. I hadn’t seen this before, so a quick ten-minute blog is useful.

This also gives me ideas for other blogs, like how to automate this for a number of objects.
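
As a sketch of that automation idea, the query below generates the TRANSFER statements for every object in a schema by reading sys.objects. The schema names and the object type filter are just placeholders for this example; review the generated statements before running them.

-- Generate ALTER SCHEMA ... TRANSFER statements for everything in the SallyDev schema
SELECT 'ALTER SCHEMA dbo TRANSFER ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ';'
FROM sys.objects AS o
JOIN sys.schemas AS s
    ON s.schema_id = o.schema_id
WHERE s.name = 'SallyDev'
  AND o.type IN ('U', 'V', 'P', 'FN', 'IF', 'TF');  -- tables, views, procedures, functions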


The Upgrade Storm

The Perfect Storm is a movie about the confluence of three weather events that combined to create a situation that resulted in the loss of all life on a fishing boat. It’s a good movie, and certainly one that might make anyone think twice about sailing off into the North Atlantic in bad weather. I certainly wouldn’t want to be traveling on the water when any type of hurricane might be nearby.

Glenn Berry writes about a perfect storm of a different sort: the perfect storm to upgrade, with the confluence of three events he sees taking place soon. We have quite a few performance-related changes coming in Windows Server 2019, in CPU hardware, and in storage technology. Those three changes could dramatically improve the performance you see from your SQL Server licenses.

If you were planning on updating a legacy system with a new version of SQL Server, you might do well to consider some of these hardware and software improvements. I don’t know what the costs might be, but that’s certainly something you will have to weigh for your environment. The argument likely would be: if you’re updating the database server, why skimp? This is a limited resource, and one that can affect your entire system. Spending some money here might make a significant difference to the way your system performs.

This might be a perfect storm of a different sort for your budget and your accounting department, as upgrades for SQL Server are likely to be pricey. Still, the improvements could dramatically improve workloads by lowering the time users spend waiting or increasing the throughput of queries. That might offset the hardware and software costs involved, since time is a valuable resource and saving it is worth something.

The problem for many people is that if you aren’t writing good code, you might not get the benefits you want from the upgrade. The performance improvements can easily be lost under poor query performance. Upgrade to take advantage of the improvements if they are something you need, but please, learn to write better code, and ensure your developers are regularly looking for better ways to use your expensive SQL Server licenses by writing more efficient code.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2 MB) podcast or subscribe to the feed at iTunes and Libsyn.


What is Projectsv13 in SQL Change Automation?

I got this question recently from someone who was evaluating the SQL Change Automation client in Visual Studio. After setting up a new PoC (Proof of Concept), they were confused about why there was a new entry in their list of SQL Servers.

If you’re wondering, this is what you see in Visual Studio 2017, for the SQL Server Object Explorer.

[Screenshot: SQL Server Object Explorer in Visual Studio]

In my case, I’ve actually got a few different instances since I’ve been working with the SQL Change Automation (SCA) for years, across a few versions.

As the instance names suggest, these are LocalDB instances. LocalDB is a version of SQL Server that is spun up on demand under your user account, rather than running as a service. Other than that, it’s a real SQL Server instance. In fact, if you want, you can connect to it with SSMS.

[Screenshot: Connect to Database Engine dialog]

If I connect with those credentials, this will appear in the SSMS Object Explorer as well, the same as any other instance.

[Screenshot: the LocalDB instance in the SSMS Object Explorer]
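
If you’re skeptical, a quick query from that connection shows it really is a full engine. This is a minimal check, assuming SQL Server 2012 or later, which is when the IsLocalDB server property was added.

-- Run against (localdb)\Projectsv13 to confirm it's a real instance
SELECT @@SERVERNAME                      AS server_name,
       SERVERPROPERTY('IsLocalDB')       AS is_localdb,
       SERVERPROPERTY('ProductVersion')  AS product_version;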

Why is this in SCA?

Early in the life of the SCA product, back when it was ReadyRoll, there was a desire not to impose burdens on application developers. If you were working on a C# or ASP.NET project and wanted to include the database, the idea was to keep things simple.

The simplest way to get up and running is LocalDB: no install, no worries about Express and a service running. This used to be more of a default, but now it’s just one of the options.

In my case, once I’ve setup my project to connect, the Shadow database, by default, is also right here, and I’ll see both databases under the (localdb)\Projectsv13 instance.

[Screenshot: project and Shadow databases under the (localdb)\Projectsv13 instance]

All of this is configurable, so you can set things to work in whatever manner works best for you. If you typically have a development instance, either local or remote, you can easily use those instead, and you’ll never need to worry about LocalDB. In fact, if this bothers you, you can disconnect or even delete the instance.

[Screenshot]

LocalDB is a lightweight way to work with SQL Server, but you don’t need to use it.


Rollbacks Are Normal

Do any of you think that rolling back a version of your application is easy? Most people struggle, and when I talk about DevOps and improving your process, the number one question is about rollbacks. In fact, I recently had a few people who struggled to even listen to the early parts of a DevOps discussion because all they could think about were previous failed releases and the need to roll back. They kept asking, “What about rolling back?” Hopefully you can stick with me a bit longer.

I’d argue that rollbacks are the process that needs DevOps more than ever, with smaller, more regular releases and practice at making changes. Whether forward or backward, we ought to be able to rev our software easily. I ran across a piece from the cloud platform blog at Google called Reliable releases and rollback – CRE life lessons. The title is an interesting one, but suspend some of your database skepticism until the end.

It’s easy to consider rolling back in the early parts of the article and say “it’s way easier to roll back your application”, and it is. Applications just stomp down new (or old) versions on top of what’s there, which is often an easy thing to do. As they say at Google, “rollbacks are normal”, which certainly seems to fit with the application paradigm.

In fact, they recommend rolling back a good release. After all, it’s much easier to practice this sort of thing when you have a working new version of software. When the release breaks your system, as mentioned in the piece, everyone’s stress level rises and the fixes often aren’t well built. Even when they work, which isn’t anywhere near all the time, there are often problems later. The idea should be to roll back and ensure everyone knows how to undo a version change. They can document the reasons for rollback and get the previous state of the application running. I hadn’t thought about this, but it makes sense. Practice in advance and be prepared. You can always re-deploy the working version.

What about databases? They have a solution, and I like it. They want the application developers to build two versions of the application: one for before the schema change, and one for after. You deploy the first version, then the schema change(s), then the second version. If there’s an issue, you roll back to the first version and undo the database changes. This sounds hard, but once you get into the swing of building code that survives additive changes to the database, it’s easy.
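
As a small illustration, using the dbo.Class table from my other posts as a stand-in, an additive change and its undo might look like this. The column name is made up for the example; the point is that both application versions can tolerate the extra nullable column.

-- Deploy: an additive change both application versions can tolerate
ALTER TABLE dbo.Class ADD Room VARCHAR(20) NULL;

-- Rollback: remove the change if the release is pulled back
-- ALTER TABLE dbo.Class DROP COLUMN Room;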

This doesn’t solve any destructive changes to the database, like dropping objects or manipulating data. I would suggest that drops are a completely separate release, and have a full backup (or snapshot) taken and saved for awhile. For data manipulation, save off the previous state of data, just in case you need to reload things.

Becoming better at not only delivering changes to the customer but also removing them when issues are detected is a valuable skill. Since we’re all likely to have a bad release at some point, this might be a way to further reduce the risk of deploying database changes.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7 MB) podcast or subscribe to the feed at iTunes and Libsyn.


The Trigger Roundup–T-SQL Tuesday #106

This month was my turn to host T-SQL Tuesday. I chose Trigger Headaches or Happiness as the topic, and I am glad that there have been quite a few responses.

I started the review almost immediately, and here are a few highlights. I separated these based on how I first thought of them after reading each post. If you think I’ve mischaracterized one, let me know.

In case you don’t want to add, it’s 9 to 13, so triggers are a headache.

Helpful Triggers

I have to start with Bert Wagner, who gets top billing while wearing a Hawaiian shirt. Thanks, Bert, and interesting solution. Wanting to use temporal tables on pre-SQL Server 2016 instances, Bert uses triggers to manage the history table. I think that’s a great way to use triggers, and while slightly hidden, it should work well overall. The downside is that any schema changes to the table also require changes to the trigger and history table, which could be missed if developers don’t realize this.

The newest evangelist at Redgate is Kendra Little, who loves triggers. Her post looks at her first experience with triggers. It was positive as a way to detect changes for downstream systems.

Shane O’Neil has written about when triggers are good, talking about the way in which he tracks when new databases appear on his instance, with an email to auditing.

Aaron Bertrand discusses INSTEAD OF triggers, and gives you the places where these are useful, or even necessary.

I’ll file this as helpful when Peter Schott writes some code to disable triggers.

Data migration is always tricky, but Jay Robinson gives a way in which triggers are helpful. I think that this is a great use, and more people ought to do this to simplify deployments and let data migrations occur over time, not all at once.

Despite his misgivings, I think Marek Masko shows that he uses triggers effectively in code.

I think Service Broker is amazing, and I wish messaging and queueing were a pattern more of us used. Reid DeWolfe shows how triggers work in conjunction with Service Broker to get things done.

A simple solution from Eugene Meidinger, with the caution that we need to be very careful with server-level triggers.

Headache Triggers

Rob Farley has a discussion about some of the problems you can run into with triggers. He likes them, and has written about them before, but he offers advice on how to deal with and write triggers in your organization.

Dave Mason has a post on DDL triggers, which fire in response to event classes. If you’ve never used these, Dave gives you a few ideas on where they might be handy.

We get a look at the good, bad, and ugly of a trigger in an AG from Bob Pusateri.

David Fowler has a great post on nested triggers, with Service Broker in between to ensure there is no limit to the number of times the trigger fires.

Jon Shaulis has a long post that provides an overview of triggers, where they work and don’t, and includes a few ideas for replacing triggers.

Claudio Silva writes about triggers being hidden and wasting his time.

I almost can’t believe the number of triggers Allen White has seen on a table, but since I have had similar customers and employers, I’m not surprised.

Logon triggers can be headaches, and James Livingston shares one of his with us.

The great Hugo Kornelis gives us a way to use triggers effectively and not be sad. Vote for a suggestion to make them more visible in SSMS. I voted, and I agree. We need better visibility.

While Eduardo Pivaral writes about good and bad triggers, I think he feels triggers are somewhat bad overall. Certainly his “good fix” might not have been the best choice in his eyes, even though it worked.

Not realizing that a trigger needs to operate on a number of records is a common mistake, so I’m putting Brian Dudley’s post in this area.

I’d say that trigger anti-patterns fall into the headache area, and we get a few situations from Nate Johnson.

Matthew McGiffen tells about the trouble with triggers. As with many of the other posts, he finds that triggers might be worth less than we initially think.


Monitoring Costs

I’m a believer in instrumentation and metrics being regularly gathered from your systems. Across the years I’ve built and bought monitoring software to use in various jobs, sometimes in layers where one piece of software uses data from another to provide insight about the way in which applications are performing. By gathering and analyzing this data, I can determine if more or less resources ought to be allocated to meet the demand.

No matter how you choose to gather this information, there are costs associated with the effort. Early in the lifecycle of an application, many developers and DBAs might just check whether a system is up. As time passes and dependence on the system grows, there is a need to understand more about it: storage, processor usage, and more. SQL Server has certainly matured in this respect, with more and more DMVs added over time to allow deeper instrumentation of the platform.
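
For example, even without buying anything, a quick look at a DMV such as sys.dm_os_wait_stats gives a sense of where an instance spends its time. This is just a small sketch of the built-in instrumentation.

-- Top waits on the instance, one small piece of the built-in instrumentation
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM   sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;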

One of the interesting things that happens when you monitor your systems is that there are various costs to your organization, some of them hidden. Certainly purchasing or building software has a cost, but what about the other costs involved? As I work for a vendor that sells this kind of software, I’m always amazed at how customers view costs. Most understand there are hardware costs, though surprisingly, a few don’t expect that.

There are other costs, some of which are outlined in the article linked above. There are plenty of time costs, especially around learning and configuring the software. In fact, this can be a significant cost. Many customers fail to account for time, and regularly become frustrated because the instrumentation never seems to meet their needs: either it hasn’t been configured for those needs, or they haven’t learned to use the system well. There are also some people who find the opposite, with monitoring visualizations becoming a time sink, where administrators get lost in the analysis of data and the variety of ways to examine the information available.

It’s not just administrators, as not shortage of executives have wanted real time visualizations of metrics on systems, sometimes just standing there watching the metrics rise and fall to gain confidence in how well their systems are functioning.

There is another cost that many people often forget about as well. Any system that monitors another imposes a load to gather that data. That load could be significant, especially if you haven’t tuned your software. There will always be some load, and I think the information is worth it, but you ought to be sure the load isn’t too significant.

It’s important we monitor our systems. I would never build that system again, but rather buy some piece of software because the process is cumbersome and there are lots of choices for software that someone has spent time building. I’d rather learn to use their application well and let it help me do my job.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.


A Summit Contest for registering by Sept 22

If you haven’t registered for the 2018 PASS Summit and plan to go, you might press to get registration done this week. Next week prices increase, but this week you can save.

[Image: PASS promotion graphic with the discount code]

There is a promotion from PASS that will let you save $200 this week. In addition, you can beat the price increase next week, which will go up $200, so if you can register by Sept 22, you save $400 over the cost next week. Use the discount code in the graphic above.

There’s also the chance to win a daily prize from PASSStuff.com. Plus, everyone registering this week will be entered for the chance to win a complementary admission to the 2019 Summit.

If you’re going to the Summit, or have asked, press to get registration done this week. You’ll save some money and get the chance to win some fun SAWG and maybe even get next year’s summit registration for free.

Register today and good luck.


Deleting a User that Owns a Schema

This was an interesting question I saw posted recently. Through SQL Server 2000, trying to delete a user that owned objects was a pain. We had to actually rebuild all the objects, which was problematic. With SQL Server 2005 and later, schemas were added as separate entities, which solves some of the issue.

To allow a user to be removed from a database when they own a schema, we need to move the ownership, or authorization, of the schema to a new user.

Here’s a short repro of what to do.

Let’s say that I have a user in my database called SallyDev. This user is a DDL_admin and can create objects. I’ll add Sally to my database with this script:

CREATE LOGIN SallyDev WITH PASSWORD = 'SomethingStrong';
GO
CREATE USER SallyDev FOR LOGIN SallyDev;
GO
ALTER ROLE db_ddladmin ADD MEMBER SallyDev;
GO

As such, SallyDev has run this script:

CREATE SCHEMA SallyDev
GO
CREATE TABLE SallyDev.Class
( ClassKey INT IDENTITY(1,1)
, ClassName VARCHAR(100)
, Active BIT
)

Now, SallyDev has left the team and we need to remove her user and login. When we try to remove the user, we get this error:

[Screenshot: the error returned when trying to drop the SallyDev user]

In SQL Server 2000, Sally would have owned the table and we would have gotten a different message. Now, Sally owns the schema, and the schema owns the table.

To fix this, we need a new owner for the schema. We can transfer it to any user with the ALTER AUTHORIZATION command, like this:

ALTER AUTHORIZATION ON SCHEMA::SallyDev TO JoeDev

I don’t recommend another user as we are just deferring the same problem. Instead, I’d move the schema to dbo.

ALTER AUTHORIZATION ON SCHEMA::SallyDev TO dbo

Once this is complete, we can drop the SallyDev user.

We can also verify the schema has moved to a new user.

[Screenshot: Schema Properties – SallyDev]
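
If you prefer a query to the properties dialog, something like this shows the schema’s owner. It’s just a sketch against the catalog views.

-- Check which principal now owns the SallyDev schema
SELECT s.name  AS schema_name,
       dp.name AS owner_name
FROM   sys.schemas AS s
JOIN   sys.database_principals AS dp
       ON dp.principal_id = s.principal_id
WHERE  s.name = 'SallyDev';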


Dark Reads During a Migration

I caught a post from the Slack Engineering team titled Re-architecting Slack’s Workspace Preferences: How to Move to an EAV Model to Support Scalability. In it, an engineering intern describes the move from a blob table (actually a table storing JSON data) to an EAV table. EAV tables generally don’t perform well, which isn’t the same thing as scalability, but in the real world the two are interrelated. I likely would have chosen a hybrid approach, using a wider table for known items but keeping an EAV table for potential one-offs.

In any case, I don’t want to discuss EAV solutions. Instead I want to discuss schema migrations. That’s part of the focus I have at Redgate with our Compliant Database DevOps solutions, as deploying schema changes is a challenge for many customers. It’s why DBAs usually have jobs, but the traditional built-the-script-from-developer-descriptions-and-lists-of-changes doesn’t always work smoothly. This creates stress for the individuals and risk for the organizations.

One of the items in Slack’s post is about a dark mode of deployment. That’s similar to what I’d call dark launching, but the idea is the same. In this case, there’s a good description of how this is helpful. In this case, there is the migration of data from one table to another. The new table was populated as part of the deployment, but rather than just using the new or old table, the application was altered to pull data from both sources and compare them. This helps to ensure the data was moved correctly. I assume issues resulted in the old data being used.

There were a couple of interesting things with this approach. First, instrumentation measured the time spent pulling data from the new table, as a way of gauging performance. Second, it allowed the team to discover read/write bugs in the new process. If your system has any headroom and a decent workload, this is a great way of making sure a data migration worked.

If you have a new feature, you can use this technique as well. Make the database changes and add the application code for your feature, but don’t expose it to the user. Instead, add code that exercises the feature, sending random data to the database and reading it back. In this way you can test that your methods work and measure the load on the database. Of course, if you do this, make sure you can silently turn off the random generation if the database is negatively affected.
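
A minimal sketch of that kind of probe, against a made-up table for the new feature, might be nothing more than writing a random value through the new path, reading it back, and cleaning up.

-- Hypothetical dark-launch probe against a new feature table
DECLARE @probe UNIQUEIDENTIFIER = NEWID();

INSERT dbo.NewFeatureData (ProbeValue)
VALUES (@probe);

SELECT COUNT(*) AS rows_found
FROM   dbo.NewFeatureData
WHERE  ProbeValue = @probe;

DELETE dbo.NewFeatureData
WHERE  ProbeValue = @probe;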

I’m a big fan of dark launching and measuring the impact of changes. In so many cases our deployments might be delayed for any number of reasons, so the pressure to release the feature today rather than tomorrow or next week is silly. I’d argue the ability to measure impact, even for a day, will help ensure better code quality for the user.

That’s if you are given time to fix any issues you find..

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.1 MB) podcast or subscribe to the feed at iTunes and Libsyn.
