A Binary Dilemma: Deciding What Online Content to Allow and What to Take Down

In this post I look at what has been happening in the debate concerning misinformation and disinformation on the web since my last post: Code of Practice on Disinformation. A Comparative Analysis: Methodological Limitations

The public dispute between Thierry Breton, the European Union (EU) Commissioner for the Internal Market, and Mr. Musk’s X (formerly Twitter) platform revolves around the degree to which X is fulfilling its obligations under the EU Digital Services Act (DSA). The EU and the press have reported this controversy through a single narrative: X is not doing enough to combat disinformation, is not meeting its obligations, and has therefore forced the EU to take further investigative action.

This narrative is misleading in that little is said about what X is doing to balance the moral and democratic value of free expression against the need to combat harmful commentary posted on its platform. X’s efforts seem to fall on deaf ears at the EU and in most of the news media. The prevailing attitude is that whatever X does is not enough, or is even disingenuous.

This critical narrative was bolstered when war broke out between Israel and Hamas following Hamas’s attack on southern Israel on October 7, 2023.

On October 10, 2023, Dan Milmo, global technology editor for The Guardian, reported that:

“X’s handling of the Israel-Hamas conflict has come under scrutiny after a ‘deluge’ of fake posts …. from accounts that have made false claims or antisemitic comments.”

Mr. Milmo wrote:

“Fake social media accounts are spreading false information about the Israel-Hamas conflict, with X and TikTok among the affected platforms, according to disinformation specialists.” Mr. Milmo cited an Israeli “Social Threat Intelligence Company” called Cyabra as reporting that:

“One in five social media accounts participating in online conversations about the Hamas attacks and their aftermath are fake…. 30,000 fake accounts have been spreading pro-Hamas disinformation or gathering sensitive details about their targets.” Mr. Milmo did not explain how Cyabra derived these numbers.

On October 12, 2023, John Jeffay, writing in Israel21c, reported how Cyabra derived these statistics: the company used “…its machine learning software to analyze two million posts, pictures, and videos in the immediate aftermath of the attacks.” Mr. Jeffay explained that these fake accounts “…. were ‘bots’ set up to generate and spread a mix of half-truths, outright lies and distortions.”
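
Cyabra has not published the details of its methodology beyond the machine learning description above. Purely for illustration, the following is a minimal sketch of the kind of behavioural scoring a bot-detection pipeline might apply to an account; the signals, thresholds, and the Account fields are my assumptions, not Cyabra’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical signals; real systems use far richer features.
    age_days: int           # account age in days
    posts_per_day: float    # average posting rate
    followers: int
    following: int
    duplicate_ratio: float  # share of near-duplicate posts, 0.0 to 1.0

def bot_score(account: Account) -> float:
    """Crude heuristic score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if account.age_days < 30:        # very new account
        score += 0.3
    if account.posts_per_day > 50:   # inhuman posting rate
        score += 0.3
    if account.following > 10 * max(account.followers, 1):  # follow-spam pattern
        score += 0.2
    score += 0.2 * account.duplicate_ratio  # copy-paste amplification
    return min(score, 1.0)

suspect = Account(age_days=3, posts_per_day=120, followers=5,
                  following=900, duplicate_ratio=0.8)
print(bot_score(suspect))  # 0.96 -- above a 0.5 threshold, flag for review
```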

On October 10, 2023, Commissioner Breton sent Mr. Musk a letter on European Commission (EC) letterhead. He began with the following remarks:

“Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU.”

Commissioner Breton reminded Mr. Musk of the “precise obligations regarding content moderation” under the DSA. He also put forward three action points for Mr. Musk to take on board.

“First, you need to be very transparent and clear on what content is permitted under your terms and consistently and diligently enforce your own policies…. Second, when you receive notices of illegal content in the EU, you must be timely, diligent and objective in taking action and removing the relevant content when warranted…. Third, you need to have in place proportionate and effective mitigation measures to tackle the risks to public security and civic discourse stemming from disinformation.”

Commissioner Breton detailed the alleged failings of X, such as the circulation of “…. fake and manipulated images and facts circulating on your platform in the EU, such as repurposed old images of unrelated armed conflicts or military footage that actually originated from video games.”

He also flagged X’s newly changed public interest policy, saying that the change “…. left many European users uncertain about what type of content the platform allows.”

Commissioner Breton concluded his letter with this ultimatum and warning:

“I urge you to ensure a prompt, accurate and complete response to this request within the next 24 hours. We will include your answer in our assessment file on your compliance with the DSA. I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed.”

Mr. Musk responded on X on October 11, 2023, writing: “Our policy is that everything is open source and transparent, an approach that I know the EU supports.” He then invited Commissioner Breton to “please list the violations you allude to so that the public can see them…. Merci beaucoup”.

X has not been idle in taking measures to combat fake and misleading content.

On October 12, 2023, the Australian Broadcasting Corporation (ABC) reported what X’s CEO, Linda Yaccarino, said in defense of the actions taken by X:

“…. the social media platform had removed hundreds of Hamas-affiliated accounts and taken action to remove or label tens of thousands of pieces of content since the militant group's attack on Israel.”

On October 18, 2023, X @support announced its “Not a Bot” program.

X @support explained this program as follows:

“New, unverified accounts will be required to sign up for a $1 annual subscription to be able to post & interact with other posts. Within this test, existing users are not affected.”

The program was released as a beta pilot in New Zealand and the Philippines.

On October 18, 2023, Roger Montti, writing in the Search Engine Journal, reported that new X users will have to subscribe before they can use the following features:

·         Bookmark posts

·         Like posts

·         Post content

·         Reply to posts

·         Repost/Quote posts by other accounts

New users who opt not to subscribe will be restricted to the following functionality (a minimal sketch of this gating logic follows the list):

·         Follow other members

·         Read tweets

·         Watch videos
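
Taken together, the two lists describe a simple permission gate keyed to account age and subscription status. The sketch below is illustrative only, using hypothetical action names derived from Mr. Montti’s feature lists; X has not published its implementation.

```python
# Hypothetical permission gate for the Not a Bot test. Action names are
# illustrative, based on the feature lists reported above.
WRITE_ACTIONS = {"bookmark", "like", "post", "reply", "repost", "quote"}
READ_ACTIONS = {"follow", "read", "watch_video"}

def is_allowed(action: str, *, is_new_account: bool, has_subscription: bool) -> bool:
    if action in READ_ACTIONS:
        return True  # read-only features remain free for everyone
    if action in WRITE_ACTIONS:
        # Existing users are unaffected by the test; new accounts must
        # hold the $1-per-year subscription to write.
        return (not is_new_account) or has_subscription
    raise ValueError(f"unknown action: {action}")

print(is_allowed("post", is_new_account=True, has_subscription=False))  # False
print(is_allowed("read", is_new_account=True, has_subscription=False))  # True
```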

According to Mr. Montti, X users have questioned the utility of the Not a Bot program, pointing out that bot accounts and scammers already pay for the blue check mark to gain access. Other users have countered that X’s verification and payment processes may present a hurdle to bot farmers, thus discouraging them.

Mr. Montti concluded with the following questions:

“Will this truly work? Will be interesting to see if it does. The bigger question is whether X will roll this out globally if the test is deemed a success.”

X has also been changing its fact-checking and blue check functionality. On October 18, 2023, Karissa Bell, senior editor at Engadget, reported that:

“X is making a significant change to its crowd-sourced fact checking tool in an attempt to stem the flow of misinformation on its platform. The new rule is one that will be familiar to professional fact checkers, academics and Wikipedia editors, but is nonetheless new to X’s approach to fact-checking: the company will now require its volunteer contributors to include sources on every community note they write.”
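
In practice, the new rule amounts to a validation step: a note that cites no source is rejected before it reaches raters. The following is a minimal sketch under that assumption; the simple URL scan is my illustration, not X’s actual validator.

```python
import re

# Illustrative check: treat any http(s) link in the note as a cited source.
URL_PATTERN = re.compile(r"https?://\S+")

def validate_note(note_text: str) -> bool:
    """Accept a community note only if it includes at least one source URL."""
    return bool(URL_PATTERN.search(note_text))

print(validate_note("This clip is actually from a video game."))                   # False
print(validate_note("This clip is from a video game; see https://example.com/x"))  # True
```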

Lisa O’Carroll, in an article published in The Guardian on October 19, 2023, reiterated X’s suspected infringements. Ms. O’Carroll did not mention X’s Not a Bot program, or the changes made to X’s community notes requiring fact checkers to specify their sources.

Ms. O’Carroll concentrated on Commissioner Breton’s charge regarding X’s allegedly deceptive user interface, specifically how blue check marks work on the platform. Previously, blue check functionality was available only “…. to verified users in the public eye, including ministers and celebrities.” The policy now is to provide the functionality to X subscribers who pay for access to the blue check.

The EU investigation will try to determine whether content spread by blue tick accounts deceives the X user base: users viewing such content “…. might mistake….” the material as coming “…. from verified sources in the pre-Musk service.”

Also, according to Ms. O’Carroll, the EU has concerns about the degree to which X’s moderation covers European languages, citing “reports” that X has only one moderator in the Netherlands.

The EU will also look into “…. the effectiveness of X’s ‘community notes’, which allow the public to comment on the veracity or legality of posts.”

Regarding how things might develop, Ms. O’Carroll reported that EU actions and investigations are not subject to a particular timeline: the EU says an investigation “…. would take as long as it takes”, but it could apply unspecified interim measures before the investigation concluded, if appropriate.

Regarding the details of the proceedings, these would focus on the functioning of the notice and action mechanism for illegal content, which involves legal orders from police or other authorities in the EU to take down content within one hour.
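
The one-hour window implies that each legal order must carry a deadline that moderation tooling can track. Below is a minimal sketch of such deadline bookkeeping, assuming a hypothetical TakedownOrder record; the DSA specifies the obligation, not the implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class TakedownOrder:
    # Hypothetical record for a legal notice-and-action order.
    content_id: str
    received_at: datetime
    deadline: datetime = field(init=False)

    def __post_init__(self) -> None:
        # Orders from EU authorities must be actioned within one hour.
        self.deadline = self.received_at + timedelta(hours=1)

    def is_overdue(self, now: datetime) -> bool:
        return now > self.deadline

order = TakedownOrder("post/123", received_at=datetime.now(timezone.utc))
print(order.is_overdue(datetime.now(timezone.utc)))  # False: still within the window
```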

In addition to the above measures, X published its DSA Transparency Report in November 2023. This document has received limited attention in the media. It affirms X’s commitment to transparency and details X’s content moderation practices and enforcement activities:

“This report covers the content moderation activities of X’s international entity Twitter International Unlimited Company (TIUC) under the Digital Services Act (DSA), during the date range August 28, 2023 to October 20, 2023.”

X’s commitment to freedom of expression is reiterated as follows:

“X was founded on a commitment to transparency. We also want people on X to feel they are able to freely express themselves, while also ensuring that conversations on X are safe, legal and unregretted. When you think about some of the world’s most powerful moments, movements, and memes, they prevailed because people had a place to express their ideas, challenge conventional norms, and demand better. That’s why free expression matters.”

X emphasised that “free expression” can “coexist” with “platform safety”.

With respect to free speech, X states its objective of being “…. reflective of real conversations happening in the world,” even if many may regard some conversations as “…. offensive, controversial, and/or narrow-minded….”

X emphasises that it “…. welcomes everyone to express themselves” but “…. will not tolerate behaviour that harasses, threatens, dehumanises or uses fear to silence the voices of others.”

The Transparency Report points to the Twitter International Unlimited Company (TIUC) Terms of Service and Rules as the means of achieving the aspiration to “…. ensure everyone feels safe expressing themselves”. X reviews these terms continuously.

In the Report, X states its commitment “…. to fair, informative, responsive, and accountable enforcement.” This is despite X’s admission that it has often been caught “in a binary paradigm of whether to leave content up, or take it down.”

X is also cognisant of the responsibility associated with enforcement, because the “…. risks of getting it wrong at the extremes are great”. This arises from the dilemma of leaving “…. up content that’s really dangerous….” without running “…. the risk of censorship”. These difficulties are expressed as follows:

“Our point is: if you do either, you need to be right. And we live in a world with many shades of grey.”

Furthermore, X is working “…. to remove dangerous and illegal content and accounts. ….” In responding to reports of illegal material, X does act “on content that violates local laws….”. By the same token, X emphasises that its experience has shown “…. that there are other types of content where a range of potential reasonable, proportionate, and effective approaches, that also seek to balance fundamental rights, can be appropriate.”

The Transparency Report explains the three-pronged approach taken by X:

“You can think about how we moderate on X in three buckets: content and accounts that remain, are restricted, and are removed.”

Regarding the “remain” bucket, the key point is that the vast majority of content is “healthy” in that it meets the standards specified by TIUC, “…. meaning it does not violate our TIUC Terms of Service and Rules or our policies such as Hateful Conduct, Abuse & Harassment, and more.” Nonetheless, a post may violate no policy and still offend some people.

The “restrict” bucket implements X’s “Freedom of Speech, Not Reach” enforcement philosophy. It applies to content seen as potentially violating X’s policies, content that is “…. awful, but lawful—we restrict the reach of posts by making the content less discoverable, and we’re making this action more transparent to everyone.”

In a post published on April 17, 2023, X describes this approach as follows:

“Restricting the reach of Tweets, also known as visibility filtering, is one of our existing enforcement actions that allows us to move beyond the binary “leave up versus take down” approach to content moderation.”

The Transparency Report elaborates, explaining that visibility filtering involves applying a “restricted reach label” to content. Once the label is applied, X removes the ability to engage with the content, such that “…. its reach is restricted to views occurring directly on the author's profile.”

The initial implementation of the restricted content approach was limited to “Hateful Conduct”. More recently, the approach has been extended to include “…. Abuse & Harassment, Civic Integrity, and Violent Speech”.

When restricting content, X has “…. a range of enforcement options for the variety of use cases….” One of these options is temporarily switching an offending account to “read-only mode”, limiting the account’s capacity to post and repost.

Regarding the removal of content, access is withheld for content reported as illegal within specific jurisdictions. Content considered “extremely harmful”, such as “…. targeted violent threats, targeted harassment, or privacy violations….”, is removed via account suspension, with return to the platform conditional on deletion of the offending content.
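
The remain/restrict/remove buckets amount to a three-way decision, with visibility filtering and read-only mode as intermediate outcomes. The sketch below is illustrative only, using hypothetical policy labels in place of X’s policy categories; the actual moderation pipeline is far richer.

```python
from enum import Enum

class Action(Enum):
    REMAIN = "remain"      # healthy content: no action taken
    RESTRICT = "restrict"  # "awful but lawful": apply a restricted reach label
    REMOVE = "remove"      # illegal or extremely harmful: withhold or suspend

# Hypothetical labels standing in for X's policy categories.
RESTRICTABLE = {"hateful_conduct", "abuse_harassment", "civic_integrity", "violent_speech"}
REMOVABLE = {"targeted_violent_threat", "targeted_harassment", "privacy_violation"}

def moderate(policy_label: str | None, illegal_in_jurisdiction: bool) -> Action:
    if illegal_in_jurisdiction or policy_label in REMOVABLE:
        return Action.REMOVE
    if policy_label in RESTRICTABLE:
        # Visibility filtering: reach limited to views on the author's profile.
        return Action.RESTRICT
    return Action.REMAIN

print(moderate("hateful_conduct", illegal_in_jurisdiction=False))  # Action.RESTRICT
print(moderate(None, illegal_in_jurisdiction=True))                # Action.REMOVE
```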

The Transparency Report summarises the efforts X has made to date:

“We've made significant progress towards improving the safeguards to protect our users and our platform, but we know that this critical work will never be done. X is committed to ensuring the safety and health of the platform and fulfilment of its DSA Compliance obligations through our continued investment in human and automated protections.”

Clearly, all of X’s efforts described above have not been persuasive enough for the EU. On December 18, 2023, Commissioner Breton announced formal infringement proceedings against X for suspected:

·         Breaches of obligations to counter illegal content and disinformation

·         Breaches of transparency obligations, and

·         Deceptive user interface design

Also on December 18, 2023, the EC published a press release announcing that it was opening formal proceedings against X under the DSA; the release provides a more detailed list of possible infringements.

Clearly not satisfied with the actions taken by X over the previous several months, the EC will, to quote the press release in full, focus its investigation on the following areas and concerns:

·         The compliance with the DSA obligations related to countering the dissemination of illegal content in the EU, notably in relation to the risk assessment and mitigation measures adopted by X to counter the dissemination of illegal content in the EU, as well as the functioning of the notice and action mechanism for illegal content in the EU mandated by the DSA, including in light of X's content moderation resources.

·         The effectiveness of measures taken to combat information manipulation on the platform, notably the effectiveness of X's so-called ‘Community Notes' system in the EU and the effectiveness of related policies mitigating risks to civic discourse and electoral processes.

·         The measures taken by X to increase the transparency of its platform. The investigation concerns suspected shortcomings in giving researchers access to X's publicly accessible data as mandated by Article 40 of the DSA, as well as shortcomings in X's ads repository.

·         A suspected deceptive design of the user interface, notably in relation to checkmarks linked to certain subscription products, the so-called Blue checks.

The EC links these concerns to specific articles in the DSA:

“If proven, these failures would constitute infringements of Articles 34(1), 34(2) and 35(1), 16(5) and 16(6), 25(1), 39 and 40(12) of the DSA. The Commission will now carry out an in-depth investigation as a matter of priority. The opening of formal infringement proceedings does not prejudge its outcome.”

Al Jazeera, on December 18, 2023, reported on X’s efforts to comply with the DSA, quoting a statement issued by X:

“X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process.”

The statement continued by emphasising the imperative that the “…. process remains free of political influence and follows the law”. Having made this point, the statement reinforced X’s commitment to “work tirelessly” to ensure freedom of expression within “…. a safe and inclusive environment for all users on our platform.”

In conclusion, X makes the obvious case that in a world with “shades of grey” it is not easy to reconcile freedom of expression with accountable enforcement without “running the risk of censorship”. This is something that media reporting does not stress when covering the trials X is having with the EU over the DSA.

X has introduced the following changes and programs to improve the transparency of its platform:

·         Not a Bot program

·         Community Notes fact-checking changes

·         Blue check changes

·         Freedom of Speech, Not Reach enforcement philosophy

·         Transparency Report

The EC and Commissioner Breton would be aware of these measures, but in their latest combative challenge they downplay the changes made by X. Nor do they acknowledge the point X is trying to make about the need for “proportionality” when monitoring and removing dangerous content, and for effective approaches that protect fundamental rights. The pressure on X to justify its transparency is set to continue, and the more pressure applied, the more the spectre of political interference casts its dark shadow over online discourse within the EU.

The Australian Human Rights Commission, writing in the context of the Australian federal government’s proposed Communications Legislation Amendment Bill, highlights the risks of giving government authorities increased powers to combat misinformation and disinformation, arguing, like X, that such powers need to “…. be balanced with ensuring we don’t unduly affect freedom of expression.” The Human Rights Commission warned that:

“There are inherent dangers in allowing any one body – whether it be a government department or social media platform – to determine what is and is not censored content. The risk here is that efforts to combat misinformation and disinformation could be used to legitimise attempts to restrict public debate and censor unpopular opinions…. Striking the right balance between combating misinformation or disinformation and protecting freedom of expression is a challenge with no easy answer.”

 

References

Article 16, Notice and action mechanisms - the Digital Services Act (DSA)

Article 25, Online interface design and organisation - the Digital Services Act (DSA)

Article 34, Risk assessment - the Digital Services Act (DSA)

Article 35, Mitigation of risks - the Digital Services Act (DSA)

Article 39, Additional online advertising transparency - the Digital Services Act (DSA)

Article 40, Data access and scrutiny - the Digital Services Act (DSA)


