Daily Coping 2 Feb 2023

Today’s coping tip is to challenge negative thoughts and look for the upside.

I’m struggling with some negative thoughts outside of work. This year, as I coach older girls, I’m finding they have many other interests: they’re distracted, less committed, and less motivated. Not all of them, but enough that it’s tough to build cohesion, which makes the job of a coach harder.

The upside is that it’s an opportunity for me to learn how I might approach things differently. I’m not sure what to do here, but I recognize there’s an upside, even as I’m unsure and frustrated, even a little down.

I started to add a daily coping tip to the SQL Server Central newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.

Posted in Blog | Tagged , , | Leave a comment

Goal Progress for Jan 2023

The grade for January is a D. Details below, but I’m just not making a lot of progress in these areas.

I set goals at the beginning of the year, and I’m tracking my progress in these updates during 2023.

Reviewing Goals

The various sections are listed below, and I’m giving a SMART grade based on what I listed. Have I gotten much done? January wasn’t too busy for me at work, but it was a little busy outside of work, with coaching ramping up and my wife being out from work for a few weeks.

I got a bit done, but not a lot.

Reading

Two books are on the list: one on marketing and one on coaching. I haven’t picked a marketing book, but I picked Wolfpack for coaching. With one started and one not, this is an F.

Career

My goals:

  • Set up a tracking system based on the book for my efforts with customers
  • Track and calculate my scores to help me better approach how I work with customers.
  • Review this with my boss and with someone in sales

Rating: D, I rated myself, but no review with others. On me for not getting this scheduled in a timely manner.

Community

  • 2 speaking engagements for community (RMOUG Training Days and SQL Bits are on my schedule)
  • Reach out to 10 SQL Sat groups that did not run an event in 2022 and motivate them for 2023. This should be at least 2 emails to each person/group.
  • Reach out to all 2022 organizers and ask about plans for 2023
  • Start coding a tool for converting schedules to HTML.
  • Hold office hours once in Q1
  • Send 3 monthly SQL Saturday updates to the community

This is a lot for Q1, especially the emails, since I don’t want to overload any one individual; even two emails to each group can be challenging. I set up a repo, and I did reach out to 9 organizers from 2022, so that’s about half. We also have 4 new events on the schedule and 6 returning ones. Given we’re only a third of the way through the quarter, I think I’m on a C pace.

Personal

My goals:

  • Use my Power BI report for stats
  • Update the Power BI report based on feedback from athletes
  • Build 6 wooden coasters – I haven’t made enough time for my hobby here, so I’m going to start small. We need more coasters (as noticed over the holidays), so I want to build 6. Same style, different.

The first two are done, but it’s been in the 20s and 30s (°F) in Denver, so there’s not a lot of enthusiasm for woodworking. Maybe a C here, as I have updated the report and gotten feedback. No coasters, though.


Context Info Across Databases–#SQLNewBlogger

Does Context Info work across databases? This post shows it does.

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers. Here are some hints to get started.

The Demo

Someone asked whether a trigger in one database would see context info set by a session working in a different database. I thought it should, but I decided to test it.

Here I’m going to create a table and trigger in the database, compare2. The trigger checks for a specific context value.

USE compare2
GO
-- The table the trigger watches
CREATE TABLE TriggerTest (myid INT, mychar CHAR(1))
GO
-- After an insert, set mychar to 'X' unless the magic context value is present
CREATE TRIGGER tri_triggertest ON dbo.TriggerTest FOR INSERT
AS
BEGIN
     IF CONTEXT_INFO() = 0x1256698456
         PRINT 'caught'
     ELSE
         UPDATE dbo.TriggerTest
          SET mychar = 'X'
          FROM inserted i
          WHERE i.myid = dbo.TriggerTest.myid
END
GO

Now, back in the first database, I’m going to set CONTEXT_INFO to some other value and insert a row into the table. This should give me a result where the trigger updates the table: the “normal” action.

USE compare1
GO
SET CONTEXT_INFO 0x000   -- any value other than the one the trigger checks for
GO
INSERT compare2.dbo.TriggerTest (myid, mychar) VALUES (1, NULL)
GO

It does: the table now contains a row with myid = 1 and mychar = 'X'.

Now, on the same connection, let’s set the magic value for the context info and insert a row. This time the trigger should skip the update, letting me bypass the normal action. This is what the person was trying to do.

SET CONTEXT_INFO 0x1256698456;  
GO 
SELECT CONTEXT_INFO(); 
GO 
INSERT compare2.dbo.TriggerTest (myid, mychar) VALUES (2, NULL)
GO

When I look at the results, I have the “caught” message. The final results from the table are shown here:

(Screenshot: the final contents of TriggerTest in SSMS, showing row 1 updated to 'X' and row 2 left NULL.)

As you can see here, the context travels with the connection, not the database. The database doesn’t matter for this value; what matters is whether the session that set the context is still alive when it accesses the other database.

SQL New Blogger

This was a quick test for me to answer a question and prove this to someone (and myself). I thought this would work, and I spent about 5 minutes devising a test. It took me less than 10 minutes to put this post together.

This shows volunteerism (helping someone), testing ability, and diligence to prove something I suspected was true. I didn’t assume, I tested. Lots of employers love that.

You can raise your brand and be a SQL New Blogger like this, showing your knowledge.


The Rise of Vector Databases

I had never heard of a vector database. I assumed this was a specialist type of database used for a particular problem domain, like a streaming database or graph database. There is a need for specialized platforms in certain situations, but I wasn’t sure what a vector was. The description I saw for a vector database was that they “… are specifically designed to work with the unique characteristics of vector embeddings. They index data in a way that makes it easy to search and retrieve objects according to their numerical values.”

That sounds like any database. However, I saw a few more articles on the hype and then some details about the ways in which this type of database is helpful. Essentially, this is a database designed to store the outputs from various Artificial Intelligence (AI) and Machine Learning (ML) models that examine unstructured data. Things like images, video, audio, and even text are turned into numerical values, or vectors. The vector database is designed to help index and then search these vectors.

What is interesting about the possibilities here is that the entire image, video, or other file isn’t turned into a single numerical hash of some sort. Instead, the AI/ML process might identify that Steve Jones is in this video, that he is wearing a hat, or that he’s wearing a kilt. If I wanted to search for other videos of Steve Jones, or for more examples of the type of hat he’s wearing, a vector database can help. It’s much more powerful than simple tags placed on a video, because the details of the content are rendered into vectors that can be compared to other vectors, not for exact matches, but for likely ones.
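To make the idea of “likely, not exact” matches concrete, here’s a minimal sketch of similarity search over embedding vectors in Python. The labels, values, and tiny 4-dimensional vectors are all invented for illustration; real embeddings from an AI/ML model have hundreds or thousands of dimensions, and a vector database replaces the brute-force scan below with specialized indexes (typically approximate nearest-neighbor structures).

```python
import math

# Toy 4-dimensional "embeddings"; these labels and numbers are
# made up purely for illustration.
videos = {
    "steve_in_hat":  [0.9, 0.1, 0.8, 0.2],
    "steve_in_kilt": [0.8, 0.2, 0.1, 0.9],
    "cooking_show":  [0.1, 0.9, 0.2, 0.1],
}

def cosine_similarity(a, b):
    # 1.0 = same direction (very similar), near 0.0 = unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, library):
    # Brute-force nearest-neighbor search: compare the query vector
    # to every stored vector and return the closest match.
    return max(library, key=lambda name: cosine_similarity(query, library[name]))

# A query vector close to, but not exactly matching, the hat video
query = [0.85, 0.15, 0.75, 0.25]
print(nearest(query, videos))  # prints "steve_in_hat"
```

The key design point is that similarity is a continuous score rather than an exact-match test, which is why tags (present or absent) can’t express what vectors can.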

One interesting example in the second link above is that content could be “vectorized” to determine whether an apple in the content refers to the fruit or to the company that Steve Jobs and Steve Wozniak made famous. That’s not easy to do with a tag, but it’s more feasible with a vector database.

And there’s a lot of data, specifically a lot of vectors, with the inventory growing all the time. As more software is built to analyze unstructured data, and as organizations collect more unstructured data, the need to apply database techniques to this data becomes important.

For those of us working with databases, I’d expect a lot of the mechanics of dealing with a database would still apply. Things like security, backups, and indexing will be needed with vector databases. We’ll get calls about slow performance, missing data, or strange results, and we’ll troubleshoot the system. How we do that specifically might vary, but those are just details we’ll work out.

I like the idea of new databases, which provide more tools, challenges, and opportunities for us as data professionals. I haven’t met anyone using a vector database yet, but I’m looking forward to the day when that happens.

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

Posted in Editorial | Tagged | Leave a comment