Social networks are broken, and it is on purpose
Anyone who has frequented social networks like Facebook for a few years can confirm a steady deterioration in the quality of content and communication.
Over time, the objectives of the best-known social networks have shifted toward commercial opportunities. Today they no longer coincide with providing platforms for the circulation of quality ideas, but with creating a virtual entertainment space that maximizes the time users spend browsing.
The average time a user spends on a social network is in fact one of its main indicators of value. Consider that Facebook, the largest network in the world by revenue and users, earns (as someone has calculated) about $0.01 for every minute of browsing by the average US user and about $0.003 per minute for the average EU user, multiplied across a worldwide audience of 2.7 billion users. Any small change in user experience that decreases dwell time by a few percentage points can wipe hundreds of millions of dollars off Facebook’s valuation.
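To get a sense of the scale, here is a back-of-the-envelope sketch in Python. Only the per-minute revenue figures come from the estimate above; the user counts and daily dwell time are illustrative assumptions.

```python
# Back-of-the-envelope: how a small drop in dwell time maps to revenue.
# User counts and minutes/day are assumptions for illustration; only the
# per-minute revenue figures come from the estimate cited above.

users_us = 200e6         # assumed US users
users_eu = 300e6         # assumed EU users
rev_per_min_us = 0.010   # $ per minute of browsing (cited estimate)
rev_per_min_eu = 0.003   # $ per minute of browsing (cited estimate)
minutes_per_day = 30     # assumed average daily dwell time

daily_revenue = (users_us * rev_per_min_us +
                 users_eu * rev_per_min_eu) * minutes_per_day

# A 3% drop in dwell time removes roughly the same share of ad revenue:
annual_loss = daily_revenue * 0.03 * 365
print(f"daily ad revenue ~ ${daily_revenue / 1e6:.0f}M")         # ~ $87M
print(f"3% less dwell time ~ ${annual_loss / 1e6:.0f}M / year")  # ~ $953M
```

Even with these rough numbers, a dip of a few percentage points in browsing time translates into hundreds of millions of dollars per year, which is exactly why dwell time is the metric platforms optimize for.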
This article talks about social networks, but many of the concepts also apply to social media. The two terms are closely related: social networks focus on tools that enable interaction between people, while social media focuses on interaction with content.
Why are social networks broken?
The statement is provocative: not all social networks are “broken”, and many focus on the quality of their content. The main platforms, however, have converged on a single intent: maximizing profit. The primary goal of a social network is to keep its users engaged with ever-changing content. Generating such content is expensive, and content generated by friends, colleagues, and acquaintances is often more attractive. This is why the most effective content on social networks is generated by the users themselves. Sometimes the creative process is stimulated by companies and players cultivating their own brands, but the bulk of posts, comments, and reactions comes from ordinary visitors, and everyone must feel comfortable expressing themselves: restrictions on reactions reduce opportunities to post new content, and less content translates into less browsing time for everyone.
The commodity of social networks is the time users spend on their pages: the more time users spend on the network, the more sponsored content they can be exposed to, and the more the network earns and gains commercial value.
Whether that time is spent constructively or destructively does not matter, because the value of a social network today lies in the amount of time spent on it, not in the quality of the user experience.
We can sum it all up by saying that your interactions on Facebook are just a side effect between one serving of sponsored content and the next.
Put users at ease
So, on a social network, to maintain a constant flow of content, everyone must feel comfortable. And how is this achieved? By removing the possibility of expressing negative feedback.
Negative feedback is very common on social networks that aim for quality content (Quora, Reddit, Stack Overflow). On the most popular networks, such as Twitter and Facebook, no disapproving reaction can be expressed (no thumbs down or downvote). You can only cast positive votes (thumbs up or upvote) or “reactions” (the so-called smileys), introduced in 2016 to give increasingly bored and disinterested users more expressiveness in responding to posts, but without exaggerating.
In 2015 Mark Zuckerberg said:
“We didn’t want to just build a Dislike button because we don’t want to turn Facebook into a forum where people are voting up or down on people’s posts. That doesn’t seem like the kind of community we want to create.”
So when someone writes something unethical, offensive, or inappropriate, other users can express anger, sadness, or laughter, but the result is that these posts gain visibility instead of losing it.
UPDATE: since April 2021, Facebook has been experimenting with a form of up/down voting in some groups; other tests have been conducted since 2018. For now, however, there seems to be no interest in extending this functionality to public interactions on the network.
The consequences of an environment where all opinions emerge
The effects became particularly evident during the Covid pandemic, with the spread of conspiracy theories on every topic, even extremely technical ones that average users barely understand: opinions built on a mix of suggestion, fake news, and pseudoscience. These flawed opinions began to gain critical mass, and it is already widely documented how interaction between people who share a paranoia feeds forms of collective psychosis.
The critical mass of unhealthy ideas
Social networks allow distant people with similar ideas to find each other, support each other, and reinforce their convictions. So far, everything looks wonderful. Problems arise when these ideas are based on incorrect assumptions (generally fake news, hysteria, and distrust of institutions), and the platforms on which they travel provide no tools to counteract their diffusion.
It is thanks to social networks that the anti-vaccine movement (the product of fake news traced back to an article whose experimental data was proven falsified), the flat-earth movement (more trivially, the product of some people’s sense of inadequacy), and the No-5G movement (again fake news, born of pseudo-scientific interpretations of some research) are now a reality.
Added to all this is the all-too-simple possibility of creating fake profiles with unverified personal information, which removes accountability from users and is frequently exploited to spread fake news or simulate supporters, capable of stoking hysteria or influencing political elections.
Countering misconceptions
Today, users who want to contest unwelcome content on social networks such as Facebook can only comment on or react to that content, which only increases its visibility. It would be enough to allow negative votes on pages or posts with socially harmful content to make them less relevant, while also giving quantitative feedback to those who produce such content.
Going back to the anti-vaccine example: a 2019 survey found that only 8% of Europeans expressed an anti-vaccine position. If that percentage reflected the distribution of users on the main social networks, anti-vaccine pages would receive roughly eleven negative votes for every positive one (92% to 8%) and would soon disappear.
The right of opinion on social networks
The misunderstanding that anyone is entitled to express an opinion on any subject is at the root of many of the destructive dynamics observed on social networks. In reality, it is simply not true that anyone can express specialist opinions: voicing opinions in the medical field without proven competence in the matter under discussion, for example, risks inducing some people to put their health at risk. Then there are aspects of knowledge that are simply not debatable, such as the claims that the earth is flat or that administering a vaccine can induce mind control.
How social networks could be improved
It would be enough to copy what other platforms already do: YouTube, which has always supported upvotes and downvotes on posts, or niche platforms such as Reddit (which, to be precise, is a social news platform), which has established itself as a forum for discussion of vertical topics with quality content.
In these environments, when users receive upvotes they emerge and become more visible, and new users discover their opinions and can react to them. When users receive disapproving reactions, on the contrary, their posts become increasingly irrelevant until they disappear from feeds.
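A minimal sketch of how such vote-based ranking can work, loosely inspired by Reddit’s “hot” formula; the constants and the sample posts below are illustrative assumptions, not any platform’s actual code:

```python
import math
from datetime import datetime, timezone

EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # arbitrary reference date

def hot_score(ups: int, downs: int, created: datetime) -> float:
    """Posts with net approval rise in the feed; net disapproval sinks them."""
    net = ups - downs
    order = math.log10(max(abs(net), 1))  # first 10 votes weigh like the next 100
    sign = (net > 0) - (net < 0)          # +1, 0, or -1
    age = (created - EPOCH).total_seconds()
    return sign * order + age / 45000     # ~12.5 h of age offsets a 10x vote gap

now = datetime.now(timezone.utc)
posts = [
    {"title": "well-sourced analysis", "ups": 120, "downs": 4,   "created": now},
    {"title": "debunked conspiracy",   "ups": 30,  "downs": 300, "created": now},
]
feed = sorted(posts, reverse=True,
              key=lambda p: hot_score(p["ups"], p["downs"], p["created"]))
print([p["title"] for p in feed])  # the heavily downvoted post sinks
```

The key property is the sign: heavily downvoted content gets a negative score and falls behind newer, approved content instead of being amplified by every reaction.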
Content moderation
The problem of moderating open-community content was already addressed, and largely solved, in the 1990s and 2000s in environments such as IRC communities: channels are managed by a group of moderators that expands over time on merit, and moderators can block users or delete content. Automatic systems check for the simplest infringements, and finally users can give feedback on individual pieces of content, with positively voted content becoming more visible and negatively voted content less relevant.
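As a rough illustration, that IRC-style pipeline boils down to three layers; the names, patterns, and rules below are hypothetical, not any real network’s system:

```python
# Three-layer moderation sketch: automatic rules, merit-based moderators,
# and user feedback. All names and rules are hypothetical.

BANNED_PATTERNS = ["buy followers", "miracle cure"]  # assumed automatic rules

def auto_check(text: str) -> bool:
    """Layer 1: automatic filter catching the simplest infringements."""
    return not any(p in text.lower() for p in BANNED_PATTERNS)

class Channel:
    def __init__(self, founder: str):
        self.moderators = {founder}
        self.posts: list[dict] = []

    def promote(self, sponsor: str, user: str) -> None:
        """Layer 2: the moderator group expands over time, on merit."""
        if sponsor in self.moderators:
            self.moderators.add(user)

    def submit(self, author: str, text: str) -> None:
        if auto_check(text):  # rejected silently if it trips an automatic rule
            self.posts.append({"author": author, "text": text, "score": 0})

    def vote(self, post_index: int, delta: int) -> None:
        """Layer 3: user feedback; upvoted content surfaces, downvoted sinks."""
        self.posts[post_index]["score"] += delta
        self.posts.sort(key=lambda p: p["score"], reverse=True)
```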
Reporting content is not the same as moderation
Reporting content is not the same as a downvote mechanism. Some content that violates a network’s terms of service (incitement to hatred or violence, for example) can be removed, while other content, such as anti-vaccine material, is often legitimate from a legal point of view even if harmful from a social one.
Furthermore, one cannot delegate to an operator, perhaps overloaded with work and external to the social context in which the discussion takes place, the role of “censor” who decides what is and is not appropriate to discuss.
Content flags work well for groups and pages that fall into very specific categories, and they remain applicable when there is an obvious crime. But the same effect as reporting can be achieved by introducing negative feedback.
The moderation of content on a social platform cannot be delegated to private (and foreign) initiatives, as it is today by political forces, citizens’ associations, entrepreneurs, and individuals; this role must be placed in the hands of the users themselves.
Just as the 1990s gave us the principle of Net Neutrality (essentially a set of principles whereby Internet service providers should deliver their services without discriminating on content), today we should promote a similar principle for social platforms.
Today, 50% of political communication in Western countries passes through the main social networks (Facebook and Twitter). It is estimated that within the next five years social networks will carry 80% of the world’s political information: do we really want the regulation of this information to remain under the control of a private, publicly traded censor?
There are already disturbing suspicions, and some evidence, of national elections (US 2016, for example) and referendums (Brexit) whose results were influenced by massive disinformation campaigns waged on social networks.
How long would it be before we saw “marketplaces” where bans targeting unwelcome subjects can be “bought” by a government, a politician, or a company with something it needs quickly forgotten?
The only delegates with the natural right to decide what is right, appropriate, or interesting to discuss are the users themselves, through the democratic exercise of their vote.
Regulating social networks is possible; in fact, we already do it
Social networks could evolve into a powerful tool for debate and social growth if only they became more balanced. Personally, I believe the only solution is to ask institutions to regulate certain aspects of how users interact with these platforms.
Some will turn up their noses or smile at the idea of involving institutions in defining how a social network should work, but such “interference” happens all the time (fortunately): just think of the GDPR, among the most recent examples.
It is in fact the task of institutions to decide how social networks and digital services handle aspects such as the protection of privacy and personal data, the protection of content covered by intellectual property, verbal violence, and so on.
If all of this can already be regulated, why don’t we ask to regulate the mechanisms that guarantee the quality of information?
Some embryonic experiments
There is good news, however: emerging blockchain technologies can accelerate the transition to healthier social networks. The problem is that these technologies are not yet mature enough for mass adoption.
There are already several experiments in this regard.
Mastodon
Mastodon is free, open-source software for running self-hosted social networking services. Similar to Twitter, it offers microblogging capabilities through independently operated servers known as “instances.” Each instance sets its own rules, policies, and moderation protocols. While users belong to a specific instance, they can still interact across instances, forming a federated social network. Mastodon is part of the Fediverse, a network of interoperable servers that also includes PeerTube, Friendica, and Lemmy. Financed through crowdfunding, Mastodon emphasizes privacy and carries no advertising. Created by Eugen Rochko in 2016, it gained substantial traction in 2022 following Twitter’s acquisition by Elon Musk. The project is managed by the German non-profit organization Mastodon gGmbH.
Steemit
Steemit is a partially blockchain-based blog and social media platform that rewards its users with the STEEM cryptocurrency for posting and curating content. It is owned by Steemit Inc., a privately held company headquartered in New York City. It is an exciting experiment that shows how new social network formats can be built in which economic interactions are transparent and capable of stimulating the creation of content.
Basic Attention Token
The Basic Attention Token (BAT) is an open-source, decentralized, Ethereum-based ad exchange that works in tandem with the Brave browser, itself derived from the more popular Chromium.
Although BAT is more related to advertising, it provides a very promising model that can be adapted to content creation and consumption in social networking.
Brave is a free and open-source web browser developed by Brave Software, Inc. which, in addition to blocking traditional banner ads and trackers, gives users a way to make cryptocurrency contributions in the form of Basic Attention Tokens.
Since April 2019, Brave users have been able to opt in to the Brave Rewards feature, which sends BAT micropayments to websites and content creators. Site owners and creators must first register with Brave as publishers. Users can enable an automatic contribution, grouped into a monthly payment proportional to the time spent on the various sites, or manually send a chosen amount while visiting content.
Users, in turn, can choose to earn BAT by viewing advertisements that appear as system notifications on their computer or device. Ad campaigns are matched to users by inference from their browsing history; this targeting is done locally, with no transmission of personal data outside the browser, eliminating the need for third-party tracking. Additionally or alternatively, users can buy or sell BAT through Brave.
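The auto-contribute feature amounts to a proportional split of a monthly budget over the user’s attention. A minimal sketch, with invented sites, times, and budget (this is not Brave’s actual code or API):

```python
# Proportional attention split, in the spirit of Brave's auto-contribute.
# Sites, seconds, and budget are invented for illustration.

monthly_budget_bat = 10.0  # user-chosen monthly contribution, in BAT

attention_seconds = {       # assumed time spent on verified publishers
    "news.example.org": 5400,
    "blog.example.net": 1800,
    "video.example.com": 7200,
}

total = sum(attention_seconds.values())
for site, secs in attention_seconds.items():
    payout = monthly_budget_bat * secs / total
    print(f"{site}: {payout:.2f} BAT")  # e.g. video.example.com gets 5.00 BAT
```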
How blockchain can improve social networks
The basic principle is very simple: the blockchain makes it possible to revolutionize the current model, in which the user is treated as a raw material, and to begin a full transition toward the prosumer, where users are active participants who benefit financially from producing content. The blockchain can guarantee transparency and profit margins for users by enabling new (semi-)decentralized social platforms.
There are already early attempts to introduce micropayment mechanisms both for users who produce content of interest and for users who consume sponsored content.
By breaking the vicious circle of universal comfort zones, platforms could genuinely aim for quality at the expense of raw browsing time, recovering a more balanced feedback mechanism based on interest in the content.
The creation of quality content requires a scoring system (upvotes and downvotes) and active moderation. It would therefore be very useful to have a system of economic incentives to support a pool of moderators and content curators on social networks.
Certifying information sources could help counter fake news and encourage fact-checking, and the availability of unfalsifiable, legally binding pseudo-anonymous identities would make social interactions more accountable.
These changes are currently held back by the limited scalability of blockchain technologies and will still need years to take hold. They could, however, be accelerated by an institutional push more attentive to the quality and freedom of information in virtual social contexts.
Conclusions
Social networks are destined to become the preferred sources of political information for all citizens, and so far they have not proved up to the task. On the contrary, they have lent themselves to the manipulation of public opinion and are a privileged tool for spreading disinformation, a very dangerous one when wielded by organizations and governments hostile to the democratic order.
While it may at first seem naive to think of defining by law how a social network should work, it is reasonable to expect that the rules of virtual coexistence will eventually be regulated, just as those of the physical world are today.
After all, many aspects of content (threats, defamation, restricted or prohibited material) are already heavily regulated; why not also regulate the aspects of virtual interaction?