Blocking is the enemy of any great future for all digital social life. We need to block blocking.
There are plenty of reasons to criticize today’s social (media) platforms and channels, the organizations running them, and those owning these organizations. However, critics regularly miss the most relevant aspect: the industry’s united love for blocking features.
Once again, Elon Musk is doing what he does best: challenging the status quo. This time, he announced plans to remove the blocking feature from his Social Media platform X (formerly Twitter).
A step that, if executed, could do more for a positive future of humanity in the long run than it might appear at first glance. There has been little fundamental product innovation in the Social (Media) space since the sector took over the #1 position as the carrier for entertainment, news, and opinions globally in under 20 years. The industry is stuck when it comes to its most significant challenges, with content moderation — the root of most of its problems — topping that list by a wide margin. (Connected to this is also the issue of spam and spam accounts, but I leave that aside for the moment.)
A challenge that goes far beyond the usual suspects (e.g., X, Facebook, Instagram, and the whole bunch of messenger services like Signal, Telegram, and WhatsApp). Even gaming and content platforms like PlayStation, video giant YouTube, or Twitch can’t do without it; no fashionable digital advertising campaign for consumer goods these days happens without social elements, chats, or digitally built “community” efforts. Almost everything we know in the VR and AR space today (remember the Metaverse? — it’s not dead!) also relies on solid social elements to enable a prosperous future for itself.
Nevertheless, no one is talking about — let alone taking action on — the consequences if the vast majority of people act in digital spaces that delude them into believing that a significant aspect of our social contract as humans walking the planet can simply be ignored: that an existence based on ignoring and blotting out your disliked neighbor is a sustainable way forward for any community. Acting this way is a mistake. We can’t truly “block” people in real life. We can only distance ourselves from them. Hence, we should act no differently in digital, virtual worlds.
As expected, Musk’s plan attracted a lot of criticism from many voices. IMHO, some of it is understandable but, in essence, primarily self-serving: not being able to block unwanted followers and debaters would make maintaining one’s social channels more work-intensive.
However, the standard “only blocking helps” mantra repeated everywhere is a big part of why the Social Media industry, and every business connected to it, remains stuck. Here is why:
Blocking creates bubbles.
Blocking is the thing that enables filter bubbles like no other UI feature on Social Media. It essentially supports bubbles in two ways:
I — Out of sight, out of mind.
II — People who want to remain part of the conversation, especially those holding opposing views, refrain from commenting out of fear of being blocked for having a different opinion.
Hence, conversation, debate, and a proper exchange of different points of view decline.
The “fake worlds”.
Once users have blocked out everything they dislike, the impression of a fake “okay world” is established within the created bubble. In this cozy yet illusory space, there is no opposition from anyone to anything people say or do.
Such a situation then sets off an ever more vicious circle. The less opposition one experiences on Social Media, the less tolerance toward other opinions is exercised, and the quicker people hit the “block user” button again if some new resistance finds its way into their bubble … And, further down that rabbit hole, block by block, the conversation medium becomes the confirmation medium.
After almost 20 years of Social Media being part of our day-to-day life, filter bubbles have become THE giant threat social media poses to humanity.
Social Media, and everything chat-based and virtual that depends on community, needs a reality check.
Because analog life does not work at all the way digital bubbles work right now. Analog life is about getting along with your neighbor, even with the most complicated characters. Analog life is about having minimum social skills for interacting with other humans. Yes, there might be arguments, and yes, there might even be fights. But a big part of a human’s real-life social skills is about managing conflicts of any kind, big and small, light or heavy ones. So, every time one user blocks another on social media, it amplifies — from a community POV — unwanted behavior.
But what about
online stalking, online harassment, excessive photo filters, fake news content, rude spam, troll comments, deep fakes, use of foul language, etc., etc., etc.? Yes, all of these are severe problems on social media platforms. No doubt.
But in essence, they could only grow this big because platform owners still favor the one easy way out of anything going wrong on their platforms as their #1 solution: blocking. And they do so because it’s an easy and cheap solution for them. It’s also comfortable, as it lets them avoid innovating in content moderation and taking it more seriously. Yet it’s not good for humanity and, at the end of the day, not good for their product itself.
If social media companies took (or, rather, were forced by lawmakers to take) a more authentic, analog-life-like approach and made sure people have to settle disputes with each other — in other words, truly policed what goes on within the chats on their digital platforms — social media would be of far more use to humanity than it is now.
Because it’s not just us, the users. Social Media companies themselves use blocking as their #1 solution to almost anything going wrong on their platforms. They block content and people that go against their “community standards” (whatever such “standards” are, after all).
This way, each Social Media platform and each “comments” section anywhere online constantly runs the risk of becoming yet another version of a brave new world. A world that eventually hardly makes sense to exist online, as it just feeds the illusion that separation is a natural solution to anything. And this only works up to the point where the on- and offline worlds diverge so far that they can no longer coexist at all. In some contexts, like those for entertainment or matters of taste only, such escapism might be okay (e.g., no use in having GTA and Roblox fans torment each other in foul language, again and again, over the question of which might be the “better” game). But it is not helpful in any form when debating politics in the broadest sense possible.
Especially since things could work out better than they have so far. Market leader Zuckerberg, the uncrowned king of all messengers and Social Media, has long been promising us that technology would fix the problems of filter bubbles, ugly comments, and online harassment on his platforms.
Meanwhile, Facebook’s engineers are still figuring out whether AI is the way forward that answers all problems of content moderation. Users might agree the question is far from settled: anyone using the platforms regularly has probably stumbled over at least some absurd, bizarre notifications about “ignored community standards” or “limited accounts” because some standard wording was misinterpreted by the technology as a reason to interfere. Sure, as in analog life, it is almost all about context. No one-solution-fits-all approach makes sense offline, which is why legal norms tend to be complex for good reasons. To believe this does not need to apply online is pure hubris.
What’s the way out?
Companies running social media operations need to moderate the content on their platforms more intensively than they do now. While some efforts have already been made more recently, especially on Facebook and Twitter, more needs to be done.
Indeed, there is always the question of when moderation turns into censorship. The trick, of course, will be to do the former while avoiding the latter. But it can be done, and done well, if there is intention behind it.
Blocking should always be the last resort.
The decision to block should involve a more complex decision-making process than just one click. Society has developed a lot of rules for the decision-making process of locking someone up or out offline over the centuries. There is no reason that companies shouldn’t be required to apply the same rules online.
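To make the idea concrete, the "more than one click" process could be sketched as a graduated escalation ladder, where blocking is only reached after every milder measure has been exhausted. This is a minimal, purely hypothetical sketch; the step names, thresholds, and class names are my own assumptions, not any platform's real moderation API.

```python
# Hypothetical sketch of "blocking as the last resort": each incident
# moves a user one step up an escalation ladder instead of triggering
# an immediate one-click block. All names and steps are illustrative.
from dataclasses import dataclass, field

ESCALATION_LADDER = ["warn", "mute_24h", "human_review", "block"]


@dataclass
class ModerationCase:
    user_id: str
    incidents: list = field(default_factory=list)

    def escalate(self, incident: str) -> str:
        """Record an incident and return the next measure on the ladder.

        The ladder caps at "block", which is only reached after a warning,
        a temporary mute, and a human review have all been applied.
        """
        self.incidents.append(incident)
        step = min(len(self.incidents), len(ESCALATION_LADDER)) - 1
        return ESCALATION_LADDER[step]
```

For example, a first report of foul language would yield `"warn"`, a second `"mute_24h"`, and only the fourth (and any later) incident would reach `"block"`. The point of the design is that the expensive step, human review, sits deliberately before the irreversible one.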
And, yes, improving moderation and making any block a balanced, deliberate decision instead of a moody one-click act will require innovation and cost the companies money.
AI will help along the way — more and more, eventually getting all the content moderation right, at least most of the time. But for a while longer, content moderation will require human oversight. And, even once the AI gets the content moderation in check, blocking still creates illusionary worlds if made as easy as it is now.
But by taking on this investment and eliminating blocking from social platforms, chats, and comments sections as much as possible, Social Media businesses secure their place in the future. Otherwise, these platforms will eventually eat themselves up, with fewer and fewer valuable interactions happening.