
Six hours of downtime was the second-worst thing to happen to Facebook last week

  • Writer: Amer Loubani
  • Oct 12, 2021
  • 4 min read

You would've thought that losing Facebook (along with Instagram and WhatsApp) for six hours, a 4.9% drop in share price, and $6 billion shaved off Mark Zuckerberg's net worth would be the worst that could happen to a social media giant in one week. Instead, the revelations delivered by Frances Haugen (a Facebook product manager turned whistle-blower) before Congress took centre stage. Ms Haugen painted the firm as an irresponsible power that traded user safety for profit in its handling of online content. Discontent with Mr Zuckerberg, his responses to the two crises, and criticism of his and Facebook's "monopolistically irresponsible" tendencies have created a vortex of criticism centred on how much the social network and its sister platforms actually care about stemming the flow of harmful and misleading online content.


This naturally led me to consider how Zuckerberg and his platforms could clean up their act and restore their standing as a global phenomenon rather than a pariah. My thoughts in this blog apply to all social media, outlining how these sites can be a force for good rather than a tool for fuelling misinformation and abuse.


Recognising the people

Social media has become an important medium of discourse, and this is something every tech chief must face. Just as debates and disagreements play out in broadcast and print media, massive amounts of debate have shifted to social networks like Facebook and Twitter - and with them, the potential to mislead and abuse. Accepting this is the first step to acknowledging the problem. Just as lawmakers have a hand in controlling debate in the physical domain (including the regulation of news media), they must also become actors in the social media space. For better or worse, it seems CEOs aren't the best people to drive change in favour of justice.


Currently, none of the major social networks provides a way to establish a user's identity in the event of misuse. This must be part of the conversation if social media companies genuinely want to avert barrages of misleading or abusive posts. A lack of anonymity encourages abusive content - an issue every social network has failed to adequately control - and identity checks would also prevent duplicate (bot) accounts from feeding online conversations with misinformation. In its current form, Facebook resists identity checks so as not to reduce the appeal of signing up, but this is an irresponsible way to lead. Zuckerberg leans on algorithmic methods of identifying posts and accounts that violate Facebook's abuse policy, but because this technology struggles to judge authenticity, and largely reacts to reports made by other users (which aren't always reliable), it leaves the network lagging behind. Human moderators are still employed, but in numbers too small to make a difference. Since Mr Zuckerberg is unwilling to commit additional manpower, the only obvious option left is to control the source. Authenticating users is a sure-fire way of matching a person to their contributions, and while it deters users from saying things they'd rather not attach their name to, it doesn't entirely solve the issue. People can still mislead, and without a targeted human effort at moderating the conversational domain, social media stands little chance of changing the direction of travel.


Apart from identity checks, making two-factor authentication mandatory for account creation would also reduce the number of inauthentic accounts, making mass postings from unverified sources less common in the social media space.
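To make the idea concrete, here is a minimal sketch of how a signup flow could verify a one-time code from a user's authenticator app, using the standard time-based OTP scheme (RFC 6238). This is my own illustration, not Facebook's actual system - the function names and flow are assumptions:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_signup_code(secret_b32, submitted, at=None):
    """Only create the account if the code from the user's device matches."""
    return hmac.compare_digest(totp(secret_b32, at=at), submitted)
```

A real deployment would also tolerate clock drift between server and device (typically by checking the adjacent time steps), but even this much raises the cost of churning out throwaway accounts.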


The algorithm sucks

Part of what Ms Haugen addressed in her remarks before Congress was how content is pushed towards users (and its impact on young people in particular). Most popular social sites today blindly push recommended posts to users based on their search and browsing habits, with no sense checks on the actual content. So if someone searches for fake proof that the US election was compromised, the content they're subsequently recommended will follow the same vein. By the same logic, if young people obsessively seek out Instagram posts representing their dream body type, the posts filling their recommended feed every day will be more of the same - and that is harmful. No matter how many extra "clicks" Facebook gains from this hype-following culture, the algorithm needs rationalising to balance the overload of content against the relevance that keeps users hooked. This should involve checks to gauge the legitimacy and potential harmfulness of user searches and preferences, so that services stop recommending posts that only add fuel to the proverbial fire.
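As a toy illustration of that last point (the topic labels, scores, and function names here are all hypothetical - nothing below reflects Facebook's real ranking system), a recommender could filter topics flagged by a separate review signal before ranking by engagement affinity:

```python
# Topics a human-review or policy pipeline has flagged (assumed labels).
FLAGGED_TOPICS = {"election-fraud-claims", "extreme-dieting"}

def recommend(candidates, user_history, limit=3):
    """Rank posts by engagement affinity, but drop flagged topics first."""
    safe = [p for p in candidates if p["topic"] not in FLAGGED_TOPICS]

    def affinity(post):
        # How often the user engaged with this topic, weighted by popularity.
        return user_history.get(post["topic"], 0) * post["engagement"]

    return sorted(safe, key=affinity, reverse=True)[:limit]
```

The point of the sketch is the ordering of the two steps: the harm check runs before the engagement ranking, so a flagged topic can never win on "clicks" alone.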


Most of the recommendations I've made in this blog involve significant commitments of manpower and resources, so the hesitancy of any social media chief to apply such measures is understandable. In my eyes, that only strengthens the case for state intervention in favour of user safety and the rationalisation of content in the social space. If not now, then when?


Author: My name is Amer, I'm a Computer Science with Business graduate currently working in tech consulting. My thoughts in this blog are based on my opinions regarding the regulation of social media, rather than concrete proposals based on research. Feel free to reach out to me via LinkedIn (on the about page) if you have any questions.





©2021 by Amer's Blog
