A New Word: 1202

1202– n. the tipping point when your brain becomes so overwhelmed with tasks you need to do, you feel too guilty to put anything off until later, prioritizing every little thing at the top of the list, leaving you immobilized.

Not quite a word, but still a fun concept. Do you get so overwhelmed you prioritize everything on the list? Are you immobilized?

I don’t and I’m not. I don’t get 1202.

I do get overwhelmed, but I can still add things to my list, or lists: things to learn, things to fix, etc. I'm usually not immobilized, even if I prioritize some things over others. What I might do if I'm overwhelmed is either buckle down and do one thing, or give up and do something completely different.

FWIW, this comes from the lunar descent of Apollo 11, where 1202 was the alarm the guidance computer raised when it had too much data to process.

From the Dictionary of Obscure Sorrows

Posted in Blog | Tagged , | 2 Comments

The Basics of TRY CATCH Blocks–#SQLNewBlogger

I was working with a customer and discussing how to do error handling. This is a short post that looks at how you can start adding TRY...CATCH blocks to your code.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

TRY...CATCH

This is a common error handling technique in other languages. C# uses it, as does Java, while Python has try/except. There are other examples, but these are good habits to get into when you don't know how code will behave, or when something in your data or environment could cause an issue.

In SQL, I think many of us get used to writing one statement in a query and forget to do error handling or transactions. However, error handling can be a good habit to build, as your code might grow and people might add more statements that need to execute together.

A classic example of code is someone writing this:

DECLARE
   @id INT = 2
 , @name VARCHAR(20) = 'Voice of the DBA'
 , @stat INT = 1;
BEGIN TRAN;
INSERT dbo.Customer
   (CustomerID, CustomerName, status)
VALUES
   (@id, @name, @stat);
IF @@ERROR = 0
   COMMIT;
ELSE
   ROLLBACK;

Note that this does look for an error and then decides what to do. However, we could do better, especially if we wanted to add a second insert or other work later. We could do this:

DECLARE
   @id INT = 2
 , @name VARCHAR(20) = 'Voice of the DBA'
 , @stat INT = 1;
BEGIN TRY
     BEGIN TRAN;
     INSERT dbo.Customer
     (CustomerID, CustomerName, status)
     VALUES
     (@id, @name, @stat);
     COMMIT;
END TRY
BEGIN CATCH
     ROLLBACK;
END CATCH

It doesn’t look like much, but this code could easily be enhanced with a better pattern. We can capture the various error messages like this:

DECLARE
   @id INT = 2
 , @name VARCHAR(20) = 'Voice of the DBA'
 , @stat INT = 1;
BEGIN TRY
     BEGIN TRAN;
     INSERT dbo.Customer
     (CustomerID, CustomerName, status)
     VALUES
     (@id, @name, @stat);
     COMMIT;
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

    RAISERROR (@ErrorMessage, -- Message text.
               @ErrorSeverity, -- Severity.
               @ErrorState -- State.
               );

    WHILE @@TRANCOUNT > 0
    BEGIN
        ROLLBACK TRANSACTION;
    END
END CATCH

In this case, we have a few statements that work with the error, using RAISERROR to re-raise it. We could also use THROW or add something else. If we had more inserts, like to a child table, we could encapsulate them all here. What's more, if we had logging, we could log the error to another system before the rollback, provided our logging were not transaction dependent.
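As a sketch of those ideas, here is a THROW-based variant that wraps a parent insert and a child insert in one transaction. The dbo.CustomerOrder table and its columns are hypothetical, just to illustrate the pattern:

```sql
-- A sketch, assuming a hypothetical dbo.CustomerOrder child table.
DECLARE
   @id INT = 2
 , @name VARCHAR(20) = 'Voice of the DBA'
 , @stat INT = 1;
BEGIN TRY
    BEGIN TRAN;
    -- Parent row
    INSERT dbo.Customer (CustomerID, CustomerName, status)
    VALUES (@id, @name, @stat);
    -- Child row, committed or rolled back together with the parent
    INSERT dbo.CustomerOrder (CustomerID, OrderDate)
    VALUES (@id, GETDATE());
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- THROW with no arguments re-raises the original error
    -- with its original number, severity, and state.
    THROW;
END CATCH
```

Note that with THROW the rollback should happen first, since a parameterless THROW re-raises the error and ends the batch.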

Using TRY...CATCH is really just structuring your code differently. Ideally, use something like a snippet in SQL Prompt so your developers have an easy way to standardize error handling.
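A generic template for such a snippet might be as simple as this skeleton, with a placeholder comment where the real work goes:

```sql
BEGIN TRY
    BEGIN TRAN;
    -- <your statements here>
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- re-raise the original error to the caller
END CATCH
```

Developers then only fill in the body, and every procedure gets the same transaction and error handling shape.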

SQL New Blogger

This post took me about 15 minutes to structure and test. I looked at a few patterns, and I liked the one in this Stack Overflow answer as a good way to generically implement this structure.

You could write a similar post showing your next boss how you implement error handling, transactions, anything. Give it a try.


The Cloud Database Cost Analysis

There is a skill that I think DBAs and sysadmins will need to develop: cloud cost analysis. I’ve thought this was important for quite a few years, and I’ve been (unsuccessfully) lobbying for cost information to be gathered and analyzed in Redgate Monitor. Hopefully, this work will get done soon, as I see more companies asking their technical people to provide analysis and justification of the resources being billed for in the cloud.

Basecamp analyzed its costs in 2023 and decided it could save money by leaving the cloud. I’ve seen other companies decide they were saving money in the cloud. Many, however, are likely unsure of the total return they get compared to the costs of cloud computing. I have seen some posts (like this one) that try to help you get a handle on your costs, but there is often a lot of complexity in cloud costs when multiple departments have different accounts (AWS) or subscriptions (Azure) with a provider.

In many ways, I think the large cloud vendors haven't really considered how to support large enterprises that need a lot of resources, a lot of different administrators, and a wide variety of ways to both secure those resources and aggregate billing. I know we have an Azure account I didn't create, where I have very limited rights to do things, but I can access some resources. Meanwhile, my group has an AWS account that appears to send its billing statements to us instead of to our corporate finance group.

What can be even more difficult to understand is that the people running applications in the cloud might want a complete picture of billing, but various technical groups might be responsible for different parts of the system. Data services could be managed separately from web servers or application servers, with different groups creating, configuring, and destroying them as needs change. Billing, however, might be something accountants want to separate by function/area/application, not by group. Knowing that all data services for all databases cost $xx/month is different from knowing that the retail website (comprising web servers, databases, and networking charges) costs $yy/month.

This is far different from the days when one group was responsible for most of our computing resources. Often we had a group of IT staffers who managed hardware, ordered it, and could link the costs of different machines to a specific application or department. Now we often don't have a central group that is even aware of all the resources the company has provisioned.

I suspect this will mean that many more technical people will not only be asked to account for the cloud resources being used, but also to split out those costs in different ways, allowing people in finance departments to aggregate the different costs from the various technical groups. I don't know who will actually track network resources, but I suspect applications will often have this data included with the servers or services that access public networks.

Controlling costs and carefully removing unnecessary or unused resources is going to be a continual problem in the cloud. I suspect that quite a few data professionals are going to be integral to helping manage this, especially for dev/test systems that are easily forgotten about over time.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.


Apr 9 Webinar: Accelerating Success: De-risking and Streamlining Releases with Flyway Enterprise

UPDATE: This moved to Apr 9

Join me Tuesday, Apr 9 for a webinar at 10:00am CDT. You can register here, then watch live and ask questions, or get the recording.

In this webinar, we’ll look at how the Flyway suite of tools can help your team better build and manage database deployments. I love Flyway, and it does a lot of things I was doing in 2001 when DevOps wasn’t even a term and we were releasing every week.

You can do the same, along with some of the advanced capabilities Flyway brings to the table that you don't have to build and maintain:

  • script generation
  • code analysis rules
  • change and drift reports
  • a comprehensive view for impact assessment

There's more, but join us on Tuesday by registering today.

Tell your friends as well, pass this along, and bring your questions.
