Being Responsible for Data

For much of my career, I’ve run SQL Server Central. A large part of the popularity of the site comes from the forums, where people can pose questions about their struggles with SQL Server and get answers from the community. There are also some off-topic forums, where people discuss various things outside of databases. Here, we have discussions about life, sports, and more. While we do expect people to maintain an air of professionalism and respect others, we don’t try to moderate content.

That’s how much of the Internet has worked, with various sites allowing users to post content but not bearing any responsibility for what has been posted. The liability lies with the person doing the posting, which creates a thorny issue when users post anonymously. Setting that aside, I’ve been a proponent of this approach, not believing that Facebook, LinkedIn, or SQL Server Central ought to be liable for what users write and post. I do think users bear that responsibility.

However, in the US, there is a Supreme Court case that may change our view, and that of many others. This case deals not with the data itself, but rather with the algorithms that might display or recommend some of that data to others. That’s an interesting approach to the case law that has shielded many tech companies from their users’ poor behavior. Essentially, the plaintiffs argue that Google and Twitter bear responsibility for their algorithms, which in this case aided terrorist recruitment. In other words, the code these companies wrote to analyze data, essentially the queries that promoted content to users, was harmful.

There are four possibilities listed in the article for what could happen, and I find them fascinating from a data analysis standpoint. Essentially, a ruling against the tech companies could shape how many of these companies process data in the future. While we might like to ensure these companies do not promote harmful content, think about this from the data analysis view. Do you want these companies moderating how they provide results? Would this mean that we need to more carefully craft our search terms? Faced with tremendous floods of information, we often depend on Google, Bing, or some search algorithm to distinguish among the various meanings of words and bring back results relevant to us. At the same time, we might wish that everyone got the same results from the same search terms.

Separate from the results themselves, what about suggested items that might be related? The quality of these varies for me, but often there is something “sponsored” or something “I might like” that is helpful to me, or just interesting. With the infinite scrolling that many people love, getting a stream of similar recommendations is a double-edged sword. It can increase learning, pleasure, and more. It can also send someone down a rabbit hole of anger and reinforcement of negative emotions. I think this is also one way that the content of the Internet creates division and disagreement among many.

While I think users are responsible for their words, I also think these companies likely bear some responsibility for the way they recommend and showcase content. At the same time, I can’t imagine how you regulate this, and I do not want to see a constant battle of lawsuits over how we interpret rules. The sex, drugs, and rock and roll issues of the past, where we tried to legislate morality, didn’t work well. I don’t want to see that again.

There isn’t a good answer here for me, and of the four possibilities, I fall somewhere between two and three: some changes to Section 230 (the statute at issue), but not heavy changes or an abandonment of the way it has been interpreted. What do you think? Should we start to hold companies responsible for how they present content? I don’t know that I worry for SQL Server Central, but it might change other sites. For us, we just show things from the last 24 hours. It’s not much of an algorithm, but it is one that likely isn’t going to get us sued.

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.


Capturing Data Changes

While working with a customer recently, I saw some code of theirs that used the OUTPUT and INTO clauses of an UPDATE statement to capture the changes made into another table. In this case, as users updated code strings in a table used in dynamic SQL, the developers wanted to capture the history of those changes in another table, giving them a rollback strategy if there were problems. This is an interesting and focused way of auditing data changes, albeit one that requires writing the code yourself and controlling updates through stored procedures.
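
As a minimal sketch of that pattern (the table and column names here are my own invention, not the customer’s actual schema), the data change and the audit write happen in a single statement:

-- Hypothetical tables: dbo.CodeStrings holds the live values used in dynamic SQL,
-- and dbo.CodeStringsHistory captures the before and after values of each update.
CREATE TABLE dbo.CodeStrings
(
    CodeStringID int IDENTITY(1, 1) PRIMARY KEY,
    CodeText     nvarchar(max) NOT NULL,
    ModifiedDate datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

CREATE TABLE dbo.CodeStringsHistory
(
    CodeStringID int NOT NULL,
    OldCodeText  nvarchar(max) NOT NULL,
    NewCodeText  nvarchar(max) NOT NULL,
    ChangedDate  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- The deleted virtual table exposes the pre-update values and inserted the new ones,
-- so one statement both changes the row and records the change.
UPDATE dbo.CodeStrings
SET    CodeText = N'SELECT CustomerID, Name FROM dbo.Customers;',
       ModifiedDate = SYSUTCDATETIME()
OUTPUT deleted.CodeStringID,
       deleted.CodeText,
       inserted.CodeText
INTO   dbo.CodeStringsHistory (CodeStringID, OldCodeText, NewCodeText)
WHERE  CodeStringID = 42;

A rollback then becomes another UPDATE, setting the live row back to the OldCodeText value from the history row you want to restore.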

This is one way of tracking the history of data values, available in many database platforms. Temporal tables are another, and we have Change Data Capture (CDC) available in SQL Server. In the past, many people have used triggers. All of these work, with various pros and cons. In many cases, I have seen triggers used, often because many developers know how to write them and they are easy to create. Easy to get wrong as well.
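
For comparison, a system-versioned temporal table (available since SQL Server 2016) pushes the history-keeping onto the engine itself. The object names below are again hypothetical:

-- The engine writes every prior row version to dbo.CodeStringsArchive automatically;
-- no trigger, OUTPUT clause, or stored procedure discipline is required.
CREATE TABLE dbo.CodeStringsTemporal
(
    CodeStringID int IDENTITY(1, 1) PRIMARY KEY,
    CodeText     nvarchar(max) NOT NULL,
    ValidFrom    datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CodeStringsArchive));

-- A point-in-time query: what did this row look like at a given moment?
SELECT CodeStringID, CodeText
FROM   dbo.CodeStringsTemporal
FOR SYSTEM_TIME AS OF '2023-03-01T00:00:00'
WHERE  CodeStringID = 42;

The trade-off is control: the history table mirrors the live table and captures every change from any source, while the OUTPUT approach records exactly the columns you choose, but only along code paths you control.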

However, triggers take some work, while platforms have often built in capabilities that make it easy to capture and track data changes. Many developers aren’t aware of these features, or haven’t spent enough time with them to know how to work with them, or whether they even work well. This is one reason many of us write about new features: to learn about them, experiment, and help others understand how to use them. Of course, not all features turn out to be as great as marketed.

Today I wonder how you capture changes and audit them. Do you use triggers? Something built in? For limited auditing, and with control of the code, would you use the OUTPUT clause with your insert/update/delete code? Actually, I wonder how many of you would even consider this for limited auditing, especially with the large number of tools and frameworks that might generate their own UPDATE statements rather than call a stored procedure where you control the code. Or do you think this isn’t a good way to capture this data?

Steve Jones


How Important is Zero Downtime?

As I work with more and more customers at Redgate, I see some interesting trends. During the pandemic (and prior), we got a lot of questions on zero downtime and how to achieve database DevOps without causing problems. Those are always interesting discussions, and I find many people want magical solutions without having to change the way they work.

Over the last year, however, more people have been looking to implement database DevOps and speed up their development, but there haven’t been many questions or demands for zero downtime during these deployments. I find that interesting as the world depends more and more on computer systems, and the customer base for many organizations may demand access to the systems at any hour of the day or night.

However, it doesn’t seem that as many people are concerned about small moments of downtime. Does this mean that organizations aren’t measuring uptime anymore? Perhaps the interruptions caused by software deployments aren’t being counted? Or maybe the application software has gotten better at hiding blips in database access. Perhaps feature flags are catching on as a standard practice, so database deployments are less troublesome.
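
If feature flags are part of the answer, the database side often follows the expand/contract pattern. Here is a minimal sketch, with invented names (dbo.Customers, EmailAddress, and LegacyEmail are assumptions for illustration): the schema change deploys first without breaking old code, the flag flips the application over later, and the cleanup ships separately.

-- Step 1 (expand): add the new column as nullable so existing code keeps working.
ALTER TABLE dbo.Customers ADD EmailAddress nvarchar(256) NULL;
GO

-- Step 2: backfill in small batches to avoid long blocking locks.
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.Customers
    SET    EmailAddress = LegacyEmail
    WHERE  EmailAddress IS NULL
      AND  LegacyEmail IS NOT NULL;

    IF @@ROWCOUNT = 0 BREAK;
END
GO

-- Step 3 (contract): only after the feature flag has moved all application reads
-- and writes to the new column does a later deployment remove the old one.
-- ALTER TABLE dbo.Customers DROP COLUMN LegacyEmail;

Because each step is backward compatible with the running application, no single deployment needs an outage window.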

I’m not sure what has changed, but I’ve noticed that making changes without downtime is no longer a requirement from many customers. Is that the case for many of you reading this? Are you less concerned about downtime? One side effect of the move to the cloud is that it’s a little less stable, and perhaps that has lowered some of the expectations of our management. Since it’s out of our control, maybe we shouldn’t be too concerned about the need for retries, whether automatic or a customer pressing a button again.

Let us know today if you feel pressure to get closer to zero downtime, either in your everyday management of databases or during deployments. Or maybe tell us you’ve gotten so good at your job that no one ever notices when you do make changes.

Steve Jones


Daily Coping 24 Mar 2023–The Final Tip

Today’s coping tip is to discover the joy in the simple things in life.

This is the last coping tip for now. It’s been 3 years, and I hope you’ve enjoyed them.

I have really tried to enjoy simple, little things in my life and travels. A text from one of my kids. A short chat with an athlete I coach. A conversation at a SQL Saturday or other event. The chance to share a picture, or see one, of a friend.

Simple, little things make life interesting and wonderful.

Finishing a book I enjoy. A small success in a game. Learning a new song or lick on guitar, or even having a single song play well. A tasty meal. A laugh from my wife.

Looking at the world and enjoying little things helps me cope with the hard things, or the upsetting things. Remembering little joys, or noticing them, can reset my attitude or dampen other negative emotions.

Try appreciating and enjoying something small. I bet you feel happier in the rest of your life.

I started to add a daily coping tip to the SQL Server Central newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.
