For much of my career, I’ve run SQL Server Central. A large part of the popularity of the site comes from the forums, where people can pose questions about their struggles with SQL Server and get answers from the community. There are also some off-topic forums, where people discuss various things outside of databases. In here, we have discussions about life, sports, and more. While we do expect people to maintain an air of professionalism and respect others, we don’t try to moderate content.
That’s how much of the Internet has worked, with various sites allowing users to post content but not bearing any responsibility for what has been posted. The liability for that lies with the person doing the posting, which creates a thorny issue when users post anonymously. Setting that aside, I’ve been a proponent of this approach, not believing that Facebook, LinkedIn, or SQL Server Central ought to be liable for what users write and post. I do think users bear that responsibility.
However, in the US, there is a Supreme Court case that may change our view, and that of many others. This case deals not with the data itself, but rather the algorithms that might display or recommend some of that data to others. That’s an interesting approach to the case law that has shielded many tech companies from their users’ poor behavior. Essentially, the plaintiffs argue that Google and Twitter bear responsibility for their algorithms, which in this case aided terrorist recruitment. In other words, the code these companies wrote to analyze data, essentially the queries that promoted content to users, was harmful.
There are four possibilities listed in the article for what could happen, and I find them fascinating from a data analysis standpoint. Essentially, a ruling against tech companies could shape how many of these companies process data in the future. While we might like to ensure these companies do not promote harmful content, think about this from the data analysis view. Do you want these companies moderating how they provide results? Would this mean that we need to more carefully craft our search terms? In the context of tremendous floods of information, we often depend on Google, Bing, or some search algorithm to distinguish among the various meanings of words and bring back results relevant to us. At the same time, we might wish that everyone got the same results from the same search terms.
Separate from the results themselves, what about suggested items that might be related? I find the quality of these varies for me, but often there is something “sponsored” or “you might like” that is helpful, or just interesting. With the infinite scrolling that many people live with, getting similar recommendations is a double-edged sword. It can increase learning, pleasure, and more. It can also send someone down a rabbit hole of anger and reinforcement of negative emotions. I think this is also one way that the content of the Internet creates division and disagreement among many.
While I think users are responsible for their words, I also think that the way that these companies recommend and showcase content likely bears some responsibility. At the same time, I can’t imagine how you regulate this, and I do not want to see a constant battle of lawsuits over how we interpret rules. The sex, drugs, and rock and roll issues of the past, where we tried to legislate morality, didn’t work well. I don’t want to see that again.
There isn’t a good answer here for me, and of the four possibilities, I fall somewhere between two and three: some changes to Section 230 (the statute itself), but not heavy changes or an abandonment of the way it has been interpreted. What do you think? Should we start to hold companies responsible for how they present content? I don’t know that I worry for SQL Server Central, but it might change other sites. For us, we just show things from the last 24 hours. It’s not much of an algorithm, but it is one that likely isn’t going to get us sued.
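To make the distinction concrete, here is a minimal sketch contrasting a recency-only feed, like the one described above, with an engagement-based ranking of the kind at issue in the case. The post data, field names, and engagement scores are all invented for illustration; this is not how SQL Server Central is actually implemented.

```python
from datetime import datetime, timedelta

# Hypothetical post records: (title, posted_at, engagement_score)
now = datetime(2023, 3, 1, 12, 0)
posts = [
    ("Indexing tips",        now - timedelta(hours=2),  10),
    ("Backup strategies",    now - timedelta(hours=30), 95),
    ("Editorial discussion", now - timedelta(hours=20),  5),
]

def last_24_hours(posts, now):
    """Recency-only 'algorithm': show everything from the last day,
    newest first. No per-user profiling, no engagement weighting."""
    cutoff = now - timedelta(hours=24)
    recent = [p for p in posts if p[1] >= cutoff]
    return sorted(recent, key=lambda p: p[1], reverse=True)

def engagement_ranked(posts):
    """Recommendation-style ranking: promote whatever draws the most
    engagement, regardless of age."""
    return sorted(posts, key=lambda p: p[2], reverse=True)

# The recency feed drops the older post; the engagement ranking
# puts that same older post first.
print([p[0] for p in last_24_hours(posts, now)])
print(engagement_ranked(posts)[0][0])
```

The first function makes no editorial choice beyond a time window, while the second actively promotes content, which is roughly the line the plaintiffs are asking the court to draw.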
Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.
I don’t like the idea of any more government than is absolutely necessary. That said, this situation we’re all facing now was brought on because some in the tech industry abused 230, trying to be both a publisher and a platform: claiming they can’t be held liable for what users post, yet also filtering it, and I don’t mean just the stuff that is illegal, but content someone at the company doesn’t like, such as political speech. I like SQL Server Central’s approach of live and let be, as long as it’s not something illegal or threats. I would hope no one would use SQL Server Central’s forums as a place to discuss politics, but I like the idea that as long as it’s not anything illegal or threats, you guys take a non-moderating approach.
The difference with platforms/sites like Google and Facebook isn’t that they have algorithms that try to help users find content relevant to them, but that they use those algorithms to promote certain information they want promoted and to demote or outright censor content they disagree with, regardless of whether it’s legal. Had Google and the rest not tried to manipulate the content users see in a biased manner, I doubt we’d be in court over this. It’s the big boys like Google who have now put a target on everyone’s back, including you guys, and it’s not fair, but we’re here now and so something has to be done. I do hope they can find some reasonable middle ground, because no platform or site should be biasing results unless it’s something the user wants or has requested.
FYI – Love the SQL Server Central site. Lots of good people there, like Jeff Moden. It’s always been (since I got into SQL) my first stop for answers. I even got a few responses from the man himself, Joe Celko.
I do agree these sites have some liability here and are trying to be both platform and publisher with algorithms. That being said, there’s no legal constraint on their decisions to show or promote content. They are private companies, so there’s no legal issue with making those decisions.