Tuesday, 1 Oct 2024

Adrian Weckler: 'TV watchdog faces uphill struggle to rein in web giants'

The Broadcasting Authority of Ireland says that it wants to become Ireland’s new safety regulator for harmful content on social media.

If it lands that role, what might it actually do? Crack down on child abuse imagery? Dam the flood of pornography on social media platforms?

Or would it be a light-touch regulator that checks in every now and again, saying certain matters “need to be rectified”?

Unfortunately, the 80-page document the BAI has published on the subject offers few specific proposals.

To be fair, any new regulatory agency must wait for legislation spelling out the relevant circumstances and offences.

And sources say the Government isn’t yet sure that the entity built to oversee RTÉ and radio stations is the right body to do it.

But Minister Richard Bruton has to abide by a European directive that requires more oversight of online video platforms. And he has also committed to setting up an Online Safety Commissioner.

So the BAI is, in theory, in the running to get some sort of expanded role.

But would a body that often insists that there should be no difference between rules governing a local radio station and a global social network even know where to begin?

The nature and size of the task is one of the most obvious challenges. About a billion hours of video are viewed every day on YouTube – just one of the platforms to be regulated.

Facebook has hired more than 10,000 people in the last two years just to look for harmful content.

The Irish Data Protection Commissioner has around 150 people, and senior officers there still admit that it sometimes only scratches the surface of its task.

The BAI, by contrast, has around 35 people, all of whom already have duties of their own.

Could a handful of extra staff regulate the unfathomably vast expanse of the internet?

The BAI’s response to this is that it would act as a rule-setter and then dip in to see how things were going.

“To resolve matters of scale and numbers of users on Irish video-sharing platform services, the BAI proposes that the regulator should principally work at a macro level,” said a BAI spokesman in response to detailed questions from this newspaper.

“This means that the role of the regulator would be to make very important regulatory decisions that affect large numbers of users simultaneously, such as by drafting codes, and then by assessing on a regular and ongoing basis the measures put in place by video-sharing platform services to the code provisions, such as in relation to age-appropriate content.”

One of the most controversial issues around content regulation is age verification.

Earlier this year, the chairperson of an Oireachtas committee suggested that Irish citizens might be required to upload passports or PPS numbers to Facebook to verify their age or identity. Hildegarde Naughton TD quickly withdrew the suggestion after a negative public reaction – but it is a proposal that has also been made by other TDs.

“The new [EU] Directive requires [online video platforms] to establish and operate a genuinely effective age verification mechanism on the service to prevent minors from viewing videos which may impair their physical, mental or moral development,” said the BAI spokesman.

Will the BAI also be looking for our passports if we want to check our family’s Facebook updates or post an Instagram selfie? Will a public services card become a required document for using Twitter?

On one issue, the BAI appears to have taken a controversial stance: that WhatsApp and Apple’s iMessage should have their encryption security compromised so they can be monitored for offensive or harmful content.

“The online safety regulator should be able to issue harmful online content notices to both open services such as social media platforms and encrypted online services such as private messaging,” says the BAI proposal.

This would also seem to extend to the text messages of Vodafone, Three and Eir users.

All of this is happening as Facebook, Google and Twitter say they need more clarity on what the Government regards as unacceptable speech online.

“It’s important for governments to draw clear lines between legal and illegal speech, based on evidence of harm and consistent with norms of democratic accountability and international human rights.

“Without clear definitions, there is a risk of arbitrary or opaque enforcement that limits access to legitimate information,” said Kent Walker, Google’s vice president for global affairs.
