Big tech told to identify AI deepfakes ahead of EU vote


The European Commission has issued a set of guidelines for digital giants to tackle risks to elections, including disinformation. File | Photo Credit: AFP

The EU called on Facebook, TikTok and other tech titans on March 26 to crack down on deepfakes and other AI-generated content by clearly labelling it ahead of Europe-wide polls in June.

The recommendation is part of a raft of guidelines that the European Commission published under a landmark content law, directing digital giants to tackle risks to elections, including disinformation. The EU executive body has unleashed a string of measures to clamp down on big tech, especially regarding content moderation.

Its biggest tool is the Digital Services Act (DSA), under which the bloc has designated 22 digital platforms as "very large", including Instagram, Snapchat, YouTube and X.

There has been feverish excitement over artificial intelligence since OpenAI's ChatGPT arrived on the scene in late 2022, but the EU's concerns over the technology's harms have grown in parallel.

Brussels especially fears the impact of Russian "manipulation" and "disinformation" on elections taking place in the bloc's 27 member states on June 6-9.

In the new guidelines, the Commission said the largest platforms "should assess and mitigate specific risks linked to AI, for example by clearly labelling content generated by AI (such as deepfakes)".

It recommended that big platforms promote official information on elections and "reduce the monetisation and virality of content that threatens the integrity of electoral processes" to diminish any risks.

"With today's guidelines we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression," said the EU's top tech enforcer, Thierry Breton.

While the guidelines are not legally binding, platforms must explain what other "equally effective" measures they are taking to limit the risks if they do not adhere to them.

The EU can ask for more information and if regulators do not believe there is full compliance, they can hit the firms with probes that could lead to hefty fines.

'Trusted' information

Under the new guidelines, the Commission also said political advertising "should be clearly labelled as such" before a tougher law on the issue comes into force in 2025. It also urges platforms to have mechanisms "to reduce the impact of incidents that could have a significant effect on the election outcome or turnout". The EU will conduct "stress-tests" with relevant platforms in late April, it said.

X has already been under investigation since December over its content moderation.

On March 14, the Commission pressed Facebook, Instagram, TikTok and four other platforms to provide more information on how they are countering AI risks to the polls.

In the past few weeks, several of the companies including Meta have outlined their plans.

TikTok has announced more of the measures it is taking, including push notifications from April that will direct users to "trusted and authoritative" information about the June vote.

TikTok has around 142 million monthly active users in the EU — and is increasingly used as a source of political information among young people.
