Deepfake: the threat becomes clearer

The shadow of deepfakes, doctored media designed to manipulate, now hangs over our daily lives. While the technology is still in its infancy, a response is taking shape.

Some states, such as California, have already reacted by adopting dedicated legislation, but this has not stopped the fraudulent technology from intruding on public debate, including by distorting political speeches.

Although still in its infancy, deepfake technology is already attracting the attention of authorities around the world. As a reminder, it relies on machine learning and artificial intelligence algorithms to manipulate video or audio content for fraudulent purposes. While deepfakes are sometimes limited to satire, for example mocking government officials and the political landscape of the day, they can also serve less avowable ends when the technology is used to generate political propaganda capable of deceiving the general public.

“The emergence of synthetic media and deepfakes forces us to an important and troubling realization: our historical conviction that video and audio are reliable recordings of reality is no longer defensible,” warned Giorgio Patrini, head of the cybersecurity company Deeptrace, earlier this week, sketching in passing a picture of how deepfakes spread through the public space in 2019.

Pornography, a favorite playground for the web’s forgers

Beyond the political debate alone, deepfakes have also spread to pornography, with the aim of ruining the reputation of a targeted person. A recent study of the practice revealed that women make up the majority of victims of deepfake porn, also known as “involuntary porn”. In these cases, their faces are inserted into existing pornographic content, with consequences similar to those of “revenge porn”, affecting not only the victim but also their friends and professional career.

Though hardly known for its activism on such issues, the online discussion site Reddit, where the term deepfake was first coined, banned deepfake porn in early 2018. The practice relies heavily on generative adversarial networks (GANs) to insert a woman’s face into existing pornographic material. Despite Reddit’s influence on Internet culture, the ban did not prevent the spread of deepfake porn, and new apps such as DeepNude, an application that quickly “undresses” a woman in an image, have appeared since.
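For readers curious about the mechanics, the adversarial principle behind these networks can be caricatured in a few dozen lines: a generator learns to produce samples that a discriminator can no longer distinguish from real data. The one-dimensional toy below is entirely illustrative and is not drawn from any deepfake tool; it trains a linear generator to mimic a Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: scalar samples from a Gaussian centered at 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: a linear map of noise, g(z) = wg*z + bg.
wg, bg = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(wd*x + bd).
wd, bd = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg
    real = real_batch(batch)

    # Discriminator ascent: maximize log D(real) + log(1 - D(fake)).
    p_real = sigmoid(wd * real + bd)
    p_fake = sigmoid(wd * fake + bd)
    wd += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    bd += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator ascent: maximize log D(fake) (non-saturating loss),
    # pushing fakes toward regions the discriminator rates as real.
    p_fake = sigmoid(wd * fake + bd)
    wg += lr * np.mean((1 - p_fake) * wd * z)
    bg += lr * np.mean((1 - p_fake) * wd)

# After training, generated samples should cluster near the real mean.
samples = wg * rng.normal(0.0, 1.0, 1000) + bg
print(round(samples.mean(), 2))
```

Real face-swapping GANs replace these two linear maps with deep convolutional networks operating on images, but the alternating optimization of the two competing objectives is the same idea.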

According to Deeptrace, the number of deepfake videos exploded over the past year, rising from 8,000 in December 2018 to 14,678 today. As one might expect on the Internet, almost all of the catalogued content is pornographic, accounting for 96% of the fake videos found online. These fake videos have been viewed 134 million times, particularly in English-speaking countries, which are unfortunately at the forefront on this subject. While Deeptrace’s figures suggest that deepfake porn is still in its infancy, it is growing fast.

While 99% of the fake videos target famous women working in the entertainment sector, the spread of deepfake technology may soon lead counterfeiters to attack anyone. “As the generative technologies that support deepfakes become more and more monetizable, it is likely that more individuals will be targeted in the future,” said Deeptrace researcher Henry Ajder, interviewed by ZDNet. “This process of commodification of deepfakes will make the technology much more accessible to counterfeiters and will almost certainly increase the amount of counterfeits produced,” he added, especially as consumer applications that put the technology within anyone’s reach keep multiplying.

The political debate increasingly polluted by deepfakes

“In view of AI’s progress, we can expect deepfakes to become better, cheaper and easier to make in a relatively short period of time. Governments should invest in developing capabilities for evaluating and measuring these technologies, to help them keep pace with the broader development of AI and better prepare for the impact of technologies like this one,” said OpenAI’s director of public affairs, Jack Clark, as the political debate could soon replace pornography as counterfeiters’ new favorite arena.

Deepfakes can also cause irreparable damage to public debate, although there are so far few cases in which their use has influenced a political outcome. The only two cases cited in the Deeptrace report occurred in Gabon and Malaysia. The Malaysian case involved the production of a sexual video implicating a minister, while the Gabonese incident concerned a video of President Ali Bongo Ondimba released by the government after he suffered a stroke.

“I would say that deepfake fraud, cybersecurity and political applications pose very serious short-term risks that we have to prepare for now, even if they are not as devastating at the moment as some people may think,” said Henry Ajder, justifying his colleagues’ alarmist statements on the subject.

The same goes for Paul Scharre, an artificial intelligence specialist who serves as technical director at the Center for a New American Security (CNAS). “We have already seen so-called ‘shallowfakes’ [crudely doctored videos, Ed.] circulate online to distort political discourse or delegitimize politicians. High-quality video manipulation by artificial intelligence raises the stakes significantly. It is only a matter of time before deepfakes are used to try to manipulate elections,” he said last August, seeing in the emergence of the deepfake a major threat to democracy.

Deepfakes also affect audio and text

While video manipulation is a well-identified risk, audio manipulation is too often forgotten. Last September, a German company fell victim to a new kind of “CEO fraud”: the attackers synthesized the voice of the group’s leader in order to impersonate him. The phone call was directed at a subsidiary of the group, and the synthesized voice proved convincing enough for employees to transfer $243,000 to an external account controlled by the attackers, who posed as a Hungarian supplier and made off with the money.

Some applications today are capable of synthesizing a voice from a series of supplied audio clips: this is notably the case of the company Lyrebird, which offers a voice-synthesis service of this kind. Calibrated for English speakers, the service lets you create an audio avatar of your voice, which can then say whatever suits you simply by reading the text it is given.
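To make the idea of “speech assembled from supplied clips” concrete, here is the crudest possible sketch. It is deliberately naive and purely illustrative: sine waves stand in for recorded words, and real services such as Lyrebird rely on neural synthesis rather than simple concatenation.

```python
import numpy as np

# Hypothetical bank of recorded clips (1-D waveforms) for one speaker,
# standing in for the audio samples a voice-cloning service is fed.
sr = 16000  # sample rate in Hz
clip_bank = {
    "hello": np.sin(2 * np.pi * 220 * np.arange(sr // 4) / sr),
    "world": np.sin(2 * np.pi * 330 * np.arange(sr // 4) / sr),
}
silence = np.zeros(sr // 20)  # short pause between words

def speak(text):
    """Naive concatenative 'synthesis': glue stored clips word by word."""
    parts = []
    for word in text.lower().split():
        parts.append(clip_bank[word])
        parts.append(silence)
    return np.concatenate(parts)

wave = speak("hello world")
print(len(wave) / sr)  # duration in seconds: 0.6
```

A neural system instead learns a model of the speaker’s timbre from the clips and can then pronounce words it has never heard recorded, which is what makes the fraud described above possible.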

But deepfakes are not limited to video and audio. Text content can also be exploited by counterfeiters. The OpenAI research group, which works on artificial intelligence technologies, has presented a new text generator called GPT-2. It is based on a neural network trained on a corpus of 40 GB of text gleaned from the web. The program is perfectly capable of producing coherent texts that could have been written by real human beings. A good example of the model’s power is Subreddit Simulator GPT2, a Reddit sub-forum where all comments are written by GPT-2 to simulate real conversations between users.
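GPT-2 itself is a very large neural network, but the autoregressive principle it uses, repeatedly predicting the next word from the text so far and feeding the prediction back in as context, can be illustrated with a toy next-word frequency model. Everything below, miniature corpus included, is invented for illustration.

```python
import random
from collections import defaultdict, Counter

# Tiny training corpus standing in for GPT-2's 40 GB of web text.
corpus = ("the quick brown fox jumps over the lazy dog . "
          "the lazy dog sleeps under the brown fox . "
          "the quick fox runs over the sleepy dog .").split()

# Count next-word frequencies: an estimate of P(next word | current word).
model = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    model[cur][nxt] += 1

def generate(start, length, seed=0):
    """Autoregressive sampling: each output word becomes the next context."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = model[out[-1]]
        if not counts:
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 8))
```

GPT-2 replaces this one-word lookup table with a Transformer that conditions on hundreds of previous tokens, which is what lets it sustain coherent paragraphs rather than locally plausible word salad.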
