Lessons from the Phoenix Project–Leave Slack

Not @Slack, but slack time: time when you aren't buried in a particular project.

In the book, The Phoenix Project, the Brent character is the jack of all trades, the one everyone goes to when problems need fixing. He gets tasked with important projects and work, which means he's always busy. I've been in this position, and a few of you are likely depended upon like this at your jobs.

Slowly, the other characters start to realize that if Brent is firefighting, or he's tied up on a long project, then he can't get other things done. He likes to complete work, as most of us do, which means that unless he has windows in which to tackle new work, he never gets to it.

It's important to break work down and work in small windows. It's also important to have some free time available for anything that comes up. That way, if something important arrives, you can tackle it without subjecting it to a long delay. And if you find that some work can be handled by others, or isn't important after all, you have a natural break to switch to something else and leave the less important project behind.

This comes from the flow of work and the Theory of Constraints, outlined for manufacturing in The Goal. Ensuring that some resource is always busy doesn't make sense from a flow standpoint. This is discussed in the book Slack as well.

If you haven’t read The Phoenix Project, it’s a quick and easy read. A little silly, somewhat exaggerated, but it makes a point that’s worth making in how we work in technology.


T-SQL Tuesday #109–Influence

This month's T-SQL Tuesday host is Jason Brimhall. He asks everyone to write about influence, which is something that I should know about and think I have, but am somewhat uncomfortable writing about.

That being said, here’s my entry this month.

Influencing Others

One of the motivations behind creating SQLServerCentral was to educate and help others. Our goal (Andy, Brian, and myself) was to find the ways we'd had success, solved problems, or handled challenges and share them with others by writing articles, answering questions, and creating questions for our Question of the Day quiz.

Across the years, as I've done that and written many editorials, there have been plenty of occasions to meet members of the SQLServerCentral community. Many of them have thanked me or talked about how the site and community made a difference in their careers, which is something I'm quite proud of. Helping others is a form of volunteerism and service, and while this is a vocation for me, it's also a passion.

One of the things I'm most proud of is influencing others to also give back and share. I can only do so much, but if I can convince others to share their knowledge, the helping and sharing grows exponentially across our #SQLFamily. To that end, I have two stories.

One is about blogging, which I tend to do regularly. I had a friend, someone I met years ago, who wanted to start blogging and asked me for some hints and ideas. I shared a few things that help me, but also challenged this individual to set a goal for writing. They did, and years later they have quite a blog. Many of you have heard of this person, but I won't mention the name here. That's not important. What is important is that they have helped many others and continue to do so today.

The other story is about a person I met at a user group meeting. I was giving a presentation and this individual asked some intelligent questions. Over a few meetings, as I talked with this person, I realized they had a lot of talent on the data platform and encouraged them to think about speaking. I kept encouraging them, and while it took over a year, this person has now spoken at many events, including the PASS Summit.

It can be scary and intimidating to share knowledge with others publicly, but it is also immensely rewarding and a way of helping others walk the path you have already covered. Perhaps I’ll influence one of you reading today to share some of what you’ve learned with those hungry to learn.


SQL Census–From the Redgate Foundry

The Foundry at Redgate Software is our version of Microsoft Research. Kind of. We tackle some projects that are interesting and might make good products at some point, but that are still in the investigative phase. You can read about the Foundry here.

Some interesting work is taking place in the Foundry, and there's one project in particular that solves a problem many of us have had, though I'm not sure how commercially viable it is in the real world.

Maybe you can help us learn more.

SQL Census

SQL Census started as an investigation into security, something that is very simple in SQL Server but can also become a cumbersome, complex nightmare.

The work has progressed well, and there's a product available. Sort of. It's at the stage where we are trying to decide how to move forward with both future development and starting to sell it. For now, you can get a look at the tool and give us some feedback. We're really looking for more information about:

  • Do you need this for compliance purposes?
  • Will this help you better secure your environment?
  • Does this meet permission management needs?
  • Something else?

We'd like to get more users, especially those with larger environments where there isn't a single user or role that everyone shares.

If you’re interested, read a little about the product and give it a try.


Republish: Microservices for Databases

I start my travels to the UK today for SQL in the City. Join me for that on Wednesday, but for today you get Microservices for Databases republished.


The Worst Data Breach

I noticed this week that Australia passed a law that requires companies to hand over user information, even if it's encrypted. Quite a few articles point out that this might require backdoors to be created in communication systems to comply with the law. Companies are required to provide plain text user communications if they can, or build tools to allow this if they don't have the capability. The proponents of the bill argue this is necessary for criminal prosecution.

Perhaps they are right, but if this capability is required, it means companies will have backdoors built into their products that allow them to decrypt things you might have expected to remain encrypted. That's disconcerting to me, not because Apple, Google, or someone else might read my communications, but because no company has really proven they can protect all the data they store.

Can you imagine how many malicious actors might spend their efforts trying to find those backdoor encryption keys? And what if there aren't backdoor keys, but companies instead build some sort of key logger into their software that copies data before it's encrypted? Can you imagine how problematic it might be to secure that data?

I’m also concerned because this would mean that there could be a few keys that can be used to get access to encrypted data, something like “master keys” in door locks. In this case, the loss of a key might mean problems for huge numbers of people. The other option would be lots of backdoor keys, potentially a different one for each customer/device, in which case we have a large data set that I’m sure will get leaked. At that time, how likely will it be that we’ll be able to implement new keys for large numbers of people?

I sympathize with law enforcement. In some ways, their jobs are much harder. In others, however, I think they have many more tools, and weakening encryption doesn't seem to be necessary. Many of us have a need to secure data, to protect it from unauthorized access. At a time when security is proving to be a challenge and record numbers of data breaches are occurring, do we really want tech companies to start building products with less security? I don't.

Steve Jones

 


Automation at Work

I do worry about the future of work for large groups of people. When I read pieces like this one in the Atlantic on automation, two things come to mind. First, we are mindlessly sticking with 19th century models of work in many cases. Second, there are opportunities to leverage computing power and dramatically reduce our need for humans in many cases.

Far too often I've seen processes and procedures in place that exist strictly because of historical precedent. We developed some way of working, likely out of expediency. We needed something done, so we found a way for a human to do it. We continue to do it that way, often because of a factory mentality. We don't trust workers, who come and go, to handle the process correctly, so we specify a way of doing things that we know works. Even if it doesn't work well.

What's amazing to me is that many of us still do this in technological jobs. I find lots of DBAs and infrastructure people that still do an amazing amount of manual work to check logs, jobs, backups, etc. They avoid automation for a variety of reasons, but often because of laziness and fear. They don't want to think and put time into changing a process, avoiding both the coding and the asking for permission. They also fear for their jobs, as shown in the article. Automate too much and maybe the company will replace you with a less skilled, far cheaper worker.

Perhaps I'm an outlier, but this has never been something I've seen in my career. When I automate things and free up time at work, I don't sit and browse Reddit or play chess, as a few profiles from the article show. Instead, I'm more like Gary. I look for, and find, ways to improve other aspects of the company. I help others. I provide more "value" for my salary. This has worked well, even in companies that had a culture of "just do your job." There are always a few managers that want thinkers and doers, not just people that mindlessly move through each day.

Automation is coming, more and more every day. As I look at the evolution of the data platform from Microsoft, the growth and capabilities of cloud services, and even the amazing third party products that free up our time, I know that the bar is constantly rising for the skills we require. What we might have expected only senior level people to do in 1999, we expect juniors to know now. Not everywhere, and certainly plenty of older management is stuck with their historical views of "just do this job," but times are changing if you seek a new employer.

I want to see more scripting, more PowerShell, more Bash scripts, more DevOps pipelines, more systems doing tedious work. That's because many of our scripts and our flows are still rudimentary. They're basic, expecting the happy paths to work, with limited testing and error handling. Instead, I'd like to automate myself out of work, but then find ways to script more robust processes, with checks that confirm my code is working, alerts when it's not, and responses that are more intelligent than a simple IF..THEN statement.

We have lots of room to improve in how we structure systems and code, whether in application development or infrastructure management. Hopefully we’ll all start to embrace more automation, and look for new opportunities rather than being fearful of change.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.8 MB) podcast or subscribe to the feed at iTunes and Libsyn.

 


MoSQL

Google is doing more SQL, or at least shifting towards relational SQL databases as a way of storing data; some of their engineers see this as a better way to store data for many problems. Since I'm a relational database advocate, I found this to be interesting.

When Google first started to publish information on BigTable and other new ways of dealing with large amounts of data, I felt that these weren't solutions I'd use or problems that many people had. The idea of MapReduce is interesting and certainly applicable to the problem space Google had of a global database of sites, but that's not a problem I've ever encountered. Instead, most of the data problems I've struggled with are still better addressed in a relational system.

Google feels the same way, and in a blog post, they talk about choosing strong consistency where possible. The post promotes their relational SQL database (Cloud Spanner), but there is a good discussion of why consistency matters and where moving to popular NoSQL models, like eventual consistency, causes a lot of problems, both for developers and clients.

This quote caught my eye, and I may use it with developers that look to avoid RDBMS systems: "Put another way, data stores that provide transactions and consistency across the entire dataset by default lead to fewer bugs, fewer headaches and easier-to-maintain application code." I think that's true, as many of the advantages promoted in non-RDBMS systems often place a greater burden on the application developer than they realize. A burden that grows over time as the techniques used create more technical debt.

I think more SQL based systems are the way to go for many problem domains. Google agrees, and if you read more about the Cloud Spanner team, you might agree as well. You’ll also find them to be incredibly smart people that think deeply about the problems that are both raised and solved by relational systems.

So go ahead and promote more SQL Server databases. Google thinks they’re good for many applications, and that’s good enough for most developers.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.1 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Lax Security is Harmful for Employment

Manure rolls downhill. Since I live on a horse ranch with some slight hills, I can attest this to be true. At least, it's true for horses, and it's true for short distances. Manure isn't very friction-free and often stops moving quickly. The same likely isn't true for bull droppings, but I haven't done much testing in that area.

Most of us would agree that those who are negligent in their jobs, especially with regard to security, ought to be punished. In some cases, this should lead to termination, though I think many of us technical people would prefer that the managers who don't budget resources for security be the ones punished.

I mentioned that manure rolls downhill, and this article on the after-effects of data breaches bears that out. Not only were there record numbers of incidents last year, but the typical cost is nearly $4 million. That's likely a mix of some very expensive breaches and lots of relatively inexpensive ones, but even the low-cost ones probably feel expensive to the small companies that experience them. In the lists of breaches I've seen, lots of smaller firms (retail, law, etc.) are included, and tens of thousands of dollars might be expensive for them.

One thing the article points out is that an increasing number of C-level executives are being terminated after breaches. I'd like to think that's good, but I'm somewhat pessimistic that the next hire will find ways to improve security. There are lots of impediments to fundamental change in most organizations, so I suspect this trend leads to shorter-term employment for CIOs and others, and likely higher salary demands because of the risk of security issues inside the company. That further puts pressure on budgets, which is another impediment to better security.

Note that it’s not just IT execs, but non-IT staff as well. Maybe I’ll be wrong and this will make a difference. Of course, IT staff are let go as well, often blamed for issues. There will always be some security issues, but I urge those of you with privileged accounts and access to sensitive data to be careful with your credentials and work to improve security when you see issues. Get written documentation when someone doesn’t allow security changes, in addition to noting your requests. This might not stop a data breach, but perhaps it will give you a better chance of not being blamed for security incidents.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.0 MB) podcast or subscribe to the feed at iTunes and Libsyn.


Basic Sequences–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I haven’t used sequences much in my work, but I ran into a question recently on how they work, so I decided to play with them a bit.

Sequences are objects in SQL Server, much like tables or functions. They have a schema, and they return numeric values. In fact, the default is a bigint, which I think is both good and very interesting. Since this will implicitly cast down to an int or other value, that's good.

The sequence is created like this:

CREATE SEQUENCE dbo.SingleIncrement
  AS INT
  START WITH 1
  INCREMENT BY 1;
GO
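
As a quick aside (my own sketch, not part of the original examples): leave off the AS clause entirely and you get the bigint default mentioned above.

CREATE SEQUENCE dbo.DefaultSequence;
-- With no data type specified, this is a bigint sequence. With no START WITH
-- or MINVALUE, an ascending sequence starts at the minimum value of its type.
GO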

These can be similar to identity values, and in fact, if I make 5 calls to dbo.SingleIncrement, I'll get the numbers 1-5 returned. Here I've made one call.
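
That call is just a SELECT against the sequence. Here's a minimal sketch of what the screenshot below captures:

SELECT NEXT VALUE FOR dbo.SingleIncrement;
-- returns 1, the START WITH value, on the first call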

(Screenshot: the result of the single call in SSMS.)

This is interesting, as NEXT VALUE FOR is what accesses the sequence and returns values. I can use this in a number of ways. For example, if I need to insert values into a table, I can do this:

CREATE TABLE dbo.SequenceTest
( SequenceTestKey INT IDENTITY(1,1)
, SequenceValue INT
, SomeChar VARCHAR(10)
)
GO
INSERT dbo.SequenceTest
(
     SequenceValue,
     SomeChar
)
VALUES
   (NEXT VALUE FOR dbo.SingleIncrement, 'AAAA')
, (NEXT VALUE FOR dbo.SingleIncrement, 'BBBB')
, (NEXT VALUE FOR dbo.SingleIncrement, 'CCCC')
, (NEXT VALUE FOR dbo.SingleIncrement, 'DDDD')
, (NEXT VALUE FOR dbo.SingleIncrement, 'EEEE')

When I query the table, I see:

(Screenshot: the rows in dbo.SequenceTest.)
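
For reference, here's the query and the values I'd expect; a sketch of my own, assuming the statements above ran exactly once and in order:

SELECT SequenceTestKey, SequenceValue, SomeChar
  FROM dbo.SequenceTest;
-- Expected: keys 1 through 5 with sequence values 2 through 6 ('AAAA' to 'EEEE'),
-- since the standalone SELECT earlier already consumed the value 1.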

Notice that the sequence number is off by one from the identity. This is because I first accessed the sequence above.

The sequence is independent of a table or column, unlike an identity. This means I can keep the sequence numbers going between tables. For example, let's create another table.

CREATE TABLE dbo.NewSequenceTest
( NewSequenceKey INT IDENTITY(1,1)
, SequenceValue INT
, SomeChar VARCHAR(10)
)
GO

Now, we can run some inserts to both tables and see what we get.

INSERT dbo.NewSequenceTest VALUES (NEXT VALUE FOR dbo.SingleIncrement, 'FFFF')
INSERT dbo.SequenceTest    VALUES  (NEXT VALUE FOR dbo.SingleIncrement, 'GGGG')
INSERT dbo.NewSequenceTest VALUES (NEXT VALUE FOR dbo.SingleIncrement, 'HHHH')
INSERT dbo.SequenceTest    VALUES  (NEXT VALUE FOR dbo.SingleIncrement, 'IIII')
INSERT dbo.NewSequenceTest VALUES (NEXT VALUE FOR dbo.SingleIncrement, 'JJJJ')

After running the inserts, I'll look at both tables. Notice that the values for the sequence are interleaved between the tables. The first insert to the new table has the value 7, which is the next value for the sequence after running the inserts for the first table.

(Screenshot: the rows in both tables, showing the interleaved sequence values.)
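
To see the interleaving without a screenshot, a query like this should show it (again a sketch of my own; the expected values assume everything above ran exactly once and in order):

SELECT 'SequenceTest' AS TableName, SequenceValue, SomeChar
  FROM dbo.SequenceTest
UNION ALL
SELECT 'NewSequenceTest', SequenceValue, SomeChar
  FROM dbo.NewSequenceTest
ORDER BY SequenceValue;
-- Expected: 2 through 6, plus 8 and 10, in SequenceTest,
-- with 7, 9, and 11 in NewSequenceTest.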

In these tests, I’ve used 11 values so far. I can continue to use values, not just for inserts, but elsewhere.

(Screenshot: another use of the sequence in SSMS.)
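
I won't reproduce that screenshot, but as one example of "elsewhere" (a sketch of my own, reusing dbo.SingleIncrement), a sequence value can be assigned straight to a variable:

DECLARE @NextId INT = NEXT VALUE FOR dbo.SingleIncrement;
SELECT @NextId AS NextId;
-- The next unused value at this point; 12 if nothing else has touched the sequence.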

This behavior is fun, handy, and useful, but also dangerous. These values get used when I query them, whether the inserts work or not. Here's a short test to look at this:

ALTER TABLE dbo.SequenceTest ADD CONSTRAINT SequencePK PRIMARY KEY (SequenceTestKey)
SELECT NEXT VALUE FOR SingleIncrement
SET IDENTITY_INSERT dbo.SequenceTest ON
INSERT dbo.SequenceTest VALUES (NEXT VALUE FOR SingleIncrement, 'ZZZZ')
SET IDENTITY_INSERT dbo.SequenceTest OFF
SELECT NEXT VALUE FOR SingleIncrement

This gives me an error, since with IDENTITY_INSERT ON the insert is expected to supply an explicit value for the identity column:

(Screenshot: the error message from the failed insert.)

and I can see that the last SELECT returns the next sequence value.

(Screenshot: the result of the final SELECT.)

There is a lot more to sequences, but I've gone on long enough here. This is a good set of basics from which to experiment further, which I'll do in future posts.

SQLNewBlogger

This post went on longer than expected; it was more of a 15-20 minute writeup as I set up a couple of quick examples, tore them down, and rebuilt them with screenshots for the post.

This is a place where I can show I’ve started to learn more, and by continuing with other items in this series, I’ll show some regular learning.


SQL in the City is Next Week

Next Wednesday, Dec 12, I'll be back in the UK with Kathi, Kendra, and Grant for SQL in the City Streamed. You can register today and join us for a set of DevOps talks that will get you thinking about ways to improve your database development.


For once I have no demos. I'll do the keynote, looking at the State of Compliance Database DevOps. I'm going to summarize some of the results and findings from other surveys, as well as observations from my conversations with customers and colleagues at various events this year.

There are lots of other sessions, where we'll talk about deployments, security and compliance, and some Redgate tools.

Join me and register today for a nice break from work.
