Editorial Republish: The SQL Twilight Zone

Once again, I’m not home. This time it’s vacation, well, working vacation. I’m in New Orleans to coach a volleyball team at a tournament this weekend. Hopefully I’m not too jet-lagged after my week of Denver-London-Denver-New Orleans travel.

In any case, I thought it appropriate to republish the SQL Twilight Zone. If SQL Server goes away, I'll pick another platform. Snowflake sounds interesting now, but Oracle, PostgreSQL, I'm not particular. I like working with data and helping others do the same.

Posted in Editorial | Tagged | Comments Off on Editorial Republish: The SQL Twilight Zone

The Book of Redgate: What’s Great about Redgate?

“I’m sick of hearing about Red Gate.”

The first article in the book has this title, which might seem strange, but the short piece then talks about how many Redgaters, as we call ourselves, love working for the company and tell our friends how great a place this is to work.

The question it asks is: why is Redgate great? It's not the benefits, the gatherings, the fun things, the inside jokes. It's not even the open, collaborative way of working, or the no-BS, no-politics attitude. It's not anything that's easy to put into words.

It’s really the culture, which is hard to describe. It’s like a family, which is similar to what I felt at J. D. Edwards as well. We have good and bad, we have disagreement and arguments, but overall we’re all in this together.

We’ve grown since then, and it’s a different place, but it’s still a great place to work and one that I hope I stay with until I retire.

I have a copy of the Book of Redgate from 2010. This was a book we produced internally about the company after 10 years in existence. At that time, I'd been there for about 3 years, and it was interesting to learn some things about the company. This series of posts looks back at the Book of Redgate 15 years later.

Posted in Blog | Tagged , , | Comments Off on The Book of Redgate: What’s Great about Redgate?

Tally Table Alternatives: #SQLNewBlogger

We published an article recently at SQL Server Central on Tally Tables in Fabric from John Miner. In it, he showed how this technique can be efficient. A day after he published it, he sent me an addendum to note that GENERATE_SERIES was available in Fabric and could be used.

I ran a few tests last week, but as I read the comments on John's article, I realized that there are three ways of setting up these tally tables that I've used, and I thought I'd summarize them a bit in this post. There's a fourth way, but I haven't used it.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Method 1: Using System Tables

The first method, which I saw Jeff Moden use many years ago, involves reading from system tables. The code typically looks like this:

SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
FROM
   sys.all_columns ac1
   CROSS JOIN sys.all_columns ac2;

Since this view has 12,000+ rows in it, the cross join is 12k * 12k, over 140 million rows, which is a lot. The ROW_NUMBER() function gives you sequential numbers in a list.

This code works, but I can never remember which table to use, and it does read from disk (or memory) to get the values. I suspect it's slightly slower than the other methods in lots of code, but perhaps not enough to go and refactor old code.
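If you only need a fixed number of rows, one common variation on this method caps the cross join with TOP. A minimal sketch, with 1,000,000 as an arbitrary example count:

```sql
-- Cap the system-table cross join at a fixed row count.
-- TOP lets SQL Server stop producing rows once it has enough,
-- rather than materializing the full 12k * 12k result.
SELECT TOP (1000000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
FROM
   sys.all_columns ac1
   CROSS JOIN sys.all_columns ac2;
```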

Method 2: Using CTEs

The method I've liked to use is a CTE. I have a SQL Prompt snippet set up with tt to give me this code.

WITH myTally(n)
AS
(SELECT n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
  FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10)) a(n)
   CROSS JOIN (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10)) b(n)
)
SELECT n
FROM myTally;

This CTE has two 10-row "tables" that give me 100 rows (10*10). If I need more, I can copy/paste the CROSS JOIN line and change the b to a c, and I've got 1,000 rows. Repeat that until you have enough rows.

This is simple code, it's in a snippet for me, and it's easy to expand. I'm not reading from anything, and I can set the size as small or as large as needed.

Method 3: Using GENERATE_SERIES

The last method is just a SELECT from the GENERATE_SERIES() function. I give it a start, a stop, and a step, so this gives me 100 rows.

SELECT value FROM GENERATE_SERIES(1, 100, 1) AS gs;

I haven't used this because I'm often on SQL 2019, not 2022, though that likely should change.

In any case, this works well for getting a large number of rows, and has the advantage of me being able to set a starting point, so if 1 isn’t appropriate, I can start at 7 or 29 or anything else. I can also set a step to skip some numbers.
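As an example of those options, borrowing the starting points mentioned above, a series can begin at 7 and skip numbers with a step:

```sql
-- GENERATE_SERIES(start, stop, step): start at 7, step by 3.
-- This produces 7, 10, 13, 16, 19, 22, 25, 28.
SELECT value FROM GENERATE_SERIES(7, 29, 3) AS gs;
```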

I like that this is less code and built in as a function, but it's only in SQL Server 2022+.

Summary

I haven't given any reason to pick any of these over the others. The post from last week shows that GENERATE_SERIES seems to be slightly faster, but that wasn't really a comprehensive performance test. I like both methods 2 and 3, and in modern versions I'd lean towards method 3 as it's built in and less code.

I’ll do a performance test elsewhere and write a bit about GENERATE_SERIES and the options available.

SQL New Blogger

This post took me about ten minutes to write, as the code is simple and the longest part was really copy/pasting links and code from SSMS or articles. The rest was quick and easy.

This is a short post that can showcase your learning, and your thinking about different methods. I’ve given a few examples of that above.

Posted in Blog | Tagged , | 1 Comment

Editorial Republish: Horrible Bosses

Maybe I’m actually watching this movie as you read this. After 48 hours in London, I’m heading back to Colorado today.

Actually a little more. I landed at 9:40am Monday and left at 11:35am today, a whirlwind trip to this amazing city. Not the first time I've done it, and likely not the last.

You get to re-read Horrible Bosses as I try to figure out how to get back on a different time zone.

Posted in Editorial | Tagged | 4 Comments