Austria wants to fight deepfakes as the technology becomes more widespread

The Austrian government published on Wednesday (25 May) an action plan to combat “deep fakes”, with the aim of better combating misinformation and hate speech. Several pieces of legislation also aim to tackle this growing problem at European level.

The rise of digital technology, accelerated by the pandemic, is leading to a rapid increase in “deep fakes”: artificial intelligence (AI)-powered media content that depicts someone doing or saying things that never actually happened.

This poses a “considerable risk to security policy, as identification of artificial influence is difficult to prove or trace”, Austrian Interior Minister Gerhard Karner said at a press conference on Wednesday.

An inter-ministerial task force was already launched on the issue at the end of 2020, involving the Austrian Federal Chancellery, the Ministry of Justice, the Ministry of Defence and the Ministry of Foreign Affairs.

This task force looked into the subject, leading to the publication of the action plan, which provides for four areas of action: “Structures and Processes”, “Governance”, “Research and Development”, and “International Cooperation”. The plan also aims to broaden and strengthen public awareness of the subject.

The Austrian government has pointed out that the regulation of deepfake videos must take into account existing fundamental and human rights, and that particular attention must be paid to the protection of freedom of expression and artistic freedom.

“Deepfakes are used to manipulate public opinion and democratic processes, or to target individuals in hateful ways on the net,” said Justice Minister Alma Zadić.

The potential of deepfakes

The Austrian Parliament assumes that new deepfakes are published every day, as sophisticated software is no longer needed to create them. Professor Hany Farid of the University of California, Berkeley, even predicts that within three to five years it will no longer be possible to distinguish fakes from genuine recordings.

Not all deepfakes are malicious in nature, and they can be used for non-hostile purposes such as satire. The majority, however, are used to damage people’s reputations through fake defamatory pornographic videos.

According to a report by Dutch startup Sensity, such use accounts for more than 90% of cases, and the number of videos generated doubled every six months between the end of 2018 and 2020.

In addition to pornographic material, which disproportionately affects women, deepfakes can also be dangerous in political life. In March 2022, a manipulated video of Ukrainian President Volodymyr Zelensky circulated in which he appeared to ask the Ukrainian military to surrender.

A 2021 study by the Panel for the Future of Science and Technology found that “the risks associated with deepfakes can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level”.

The study therefore recommended that public authorities prevent and combat the negative effects of this technology and integrate solutions into their legislative frameworks.

European legislation to fight against deepfakes

In response, several European legislative files address the issue of deepfakes, such as the Digital Services Act (DSA) and the Artificial Intelligence (AI) Act.

The European Parliament’s draft report on the AI Act of 20 April highlighted “the emergence of a new generation of digitally manipulated content, also known as deepfakes”. Due to their potential for fraud, deepfakes should be subject both to transparency requirements and to the compliance requirements for high-risk AI systems.

That does not mean high-risk AI systems are banned; rather, compliance “makes these systems more trustworthy and more likely to succeed in the European market”, the co-rapporteurs emphasised.

In addition, the DSA requires very large online platforms to carry out risk assessments, in particular of the intentional manipulation of their services, which could negatively affect the protection of public health, minors, civic discourse, elections and security.

Patrick Breyer, MEP for the German Pirate Party and DSA rapporteur, told EURACTIV that while these non-binding recitals can help as countermeasures, in cases of intentional manipulation, “human intelligence is necessary”.

Since it might be impossible to decide whether a piece of content is a deepfake, the detection of such creations will in future be a matter of “media and counter-research skills”, Mr Breyer said.