Deepfakes: neural networks have been taught to generate audio and video fakes

Today, the effort of a single programmer is enough to create an “individual” news picture for any of us and to falsify the media reports included in it, artificial intelligence and cybersecurity specialists told Izvestia.

Until recently, experts estimated that this required the work of multiple teams. Such acceleration has become possible thanks to the development of technologies for attacking neural networks and for generating audio and video forgeries with “deepfake” programs. The newspaper Izvestia was recently subjected to a similar attack, when three Libyan news portals simultaneously published a message that allegedly appeared in one of its issues. According to experts, within 3-5 years we can expect an invasion of robotic manipulators capable of churning out fakes automatically.

Brave new world

There are more and more projects that tailor the information picture to the perception of specific users. One example was the recent action of three Libyan portals, which published a report that allegedly appeared in the November 20 issue of Izvestia. The creators of the fake modified the newspaper's front page, placing on it a message about negotiations between Field Marshal Khalifa Haftar and the Prime Minister of the Government of National Accord (GNA), Fayez Sarraj. The fake, set in the Izvestia typeface, was accompanied by a photo of the two leaders taken in May 2017. The banner with the publication's logo was cut from the actual November 20 issue, while all the other texts on the page were taken from the October 23 issue.

Specialists believe that in the foreseeable future such falsifications could be produced automatically.

“Artificial intelligence technologies are now completely open, while devices for receiving and processing data are getting smaller and cheaper,” Yuri Vilsiter, Doctor of Physical and Mathematical Sciences, Professor of the Russian Academy of Sciences and head of a department at FSUE “GosNIIAS”, told Izvestia. - Therefore, it is highly likely that in the near future not just the state and large corporations, but even private individuals will be able to eavesdrop and spy on us, as well as manipulate reality. In the coming years it will be possible to analyze a user's preferences and influence them through news feeds and very clever fakes.

According to Yuri Vilsiter, technologies that could be used for such an intrusion into the mental environment already exist. In theory, an invasion of robotic bots can be expected within a few years, he said. A limiting factor is the need to collect large databases of real people's responses to artificial stimuli, with their long-term consequences tracked. Such tracking will likely require several more years of research before targeted attacks can be carried out reliably.

Vision attack

Alexey Parfentiev, head of the analytics department at SearchInform, also agrees with Yuri Vilsiter. According to him, experts already predict attacks on neural networks, although now there are practically no such examples.

- Researchers at Gartner believe that by 2022, 30% of all cyberattacks will be aimed at corrupting the data on which neural networks are trained and at stealing ready-made machine learning models. Then, for example, unmanned vehicles could suddenly start mistaking pedestrians for other objects. And then we will be talking not about financial or reputational risk, but about the life and health of ordinary people, the expert believes.

Attacks on computer vision systems are already being carried out as part of research. The purpose of such attacks is to make a neural network detect something in an image that is not there - or, conversely, fail to see what actually is.

“One of the actively developing topics in the field of neural network training is so-called adversarial attacks,” explained Vladislav Tushkanov, a web analyst at Kaspersky Lab. - In most cases they are aimed at computer vision systems. To carry out such an attack you usually need either full access to the neural network (so-called white-box attacks) or at least to the results of its work (black-box attacks). There are no methods that can deceive any computer vision system in 100% of cases. In addition, tools have already been created that let you test neural networks for resistance to adversarial attacks and increase that resistance.

In the course of such an attack, the attacker tries to alter the image being recognized so that the neural network stops working correctly. Noise is often superimposed on the photo, similar to what appears when shooting in a poorly lit room. A person usually barely notices such interference, but the neural network begins to malfunction. To carry out such an attack, however, the attacker needs access to the algorithm.
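
Below is a minimal sketch of the kind of “noise” attack described above, assuming the white-box access Tushkanov mentions. The classifier is an untrained stand-in network, and the function name fgsm_perturb, the epsilon budget and the input shapes are illustrative assumptions, not the tools used in the attacks the experts describe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in image classifier; a real attack would target a trained production model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Add barely visible noise that pushes the classifier away from the correct answer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in a valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: a random tensor stands in for a 224x224 RGB photo.
photo = torch.rand(1, 3, 224, 224)
label = torch.tensor([0])            # assumed ground-truth class index
noisy = fgsm_perturb(photo, label)
print((noisy - photo).abs().max())   # the change never exceeds epsilon
```

The perturbation is bounded by epsilon, which is why a person barely notices it, while the gradient step is computed directly from the model - exactly the “access to the algorithm” requirement noted above.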

According to Stanislav Ashmanov, General Director of Neuroset Ashmanov, there are currently no methods of dealing with this problem. Moreover, the technology is available to anyone: an average programmer can use it by downloading the necessary open source software from GitHub.

- An attack on a neural network is a technique and a set of algorithms for deceiving a neural network into producing false results - in effect, breaking it like a door lock, - Ashmanov believes. - For example, it is now fairly easy to make a face recognition system think that it is not you but Arnold Schwarzenegger standing in front of it - this is done by mixing additives imperceptible to the human eye into the data fed to the neural network. The same attacks are possible against speech recognition and analysis systems.
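
A hedged sketch of the targeted variant Ashmanov describes - nudging a photo so that an embedding-based recognizer maps it close to another person's identity. The embedder below is a generic placeholder network, and impersonate, the step budget and the image sizes are assumptions for illustration only, not any production face recognition system:

```python
import torch
import torch.nn as nn

# Placeholder for a face-embedding CNN.
embedder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 64),
)
embedder.eval()
for p in embedder.parameters():
    p.requires_grad_(False)

def impersonate(photo, target_embedding, steps=50, step_size=1e-2, budget=0.03):
    """Iteratively add a small, bounded perturbation that pulls the photo's
    embedding toward the target identity."""
    delta = torch.zeros_like(photo, requires_grad=True)
    for _ in range(steps):
        similarity = torch.cosine_similarity(
            embedder(photo + delta), target_embedding, dim=1
        ).mean()
        loss = 1 - similarity              # lower loss = closer to the target identity
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-budget, budget)  # keep the change imperceptible to the eye
            delta.grad.zero_()
    return (photo + delta).detach()

victim = torch.rand(1, 3, 112, 112)
target = embedder(torch.rand(1, 3, 112, 112)).detach()  # the "other person's" embedding
fake = impersonate(victim, target)
```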

The expert is sure that things will only get worse - these technologies have reached the masses, fraudsters are already using them, and there is no protection against them, just as there is no protection against the automated creation of video and audio forgeries.

Deepfakes

Deepfake technologies based on deep learning of neural networks already pose a real threat. Video or audio fakes are created by editing footage or overlaying the faces of famous people, who then appear to pronounce the required text and play the required role in the plot.
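
For orientation, here is a conceptual sketch of the classic face-swap setup behind many deepfake tools: one shared encoder learns a common face representation and one decoder per identity reconstructs it, so routing person A's frames through person B's decoder produces the swapped face. All layer sizes and names are illustrative assumptions, not the architecture of any specific product:

```python
import torch
import torch.nn as nn

def down(c_in, c_out):   # halves spatial resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.LeakyReLU(0.1))

def up(c_in, c_out):     # doubles spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

# One encoder shared by both identities, one decoder per identity.
encoder   = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))
decoder_a = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
decoder_b = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

# Training reconstructs A's photos through decoder_a and B's through decoder_b;
# at inference, A's photo routed through decoder_b yields B's face with A's expression.
face_a  = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(face_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```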

“Deepfake makes it possible to replace a person's lip movements and speech on video, which creates a feeling that what is happening is real,” says Andrey Busargin, director of the department for innovative brand protection and intellectual property at Group-IB. - Fake celebrities “offer” users on social networks to take part in draws for valuable prizes (smartphones, cars, sums of money), etc. Links from such video posts often lead to fraudulent and phishing sites, where users are asked to enter personal information, including bank card details. Such schemes pose a threat both to ordinary users and to the public figures featured in the clips. Celebrity images begin to be associated with scams or advertised goods, and this is where personal brand damage comes in, he says.

Another threat is associated with the use of “fake voices” for telephone fraud. For example, in Germany, cybercriminals used a voice deepfake in a telephone conversation, posing as a company executive, to make the head of a subsidiary in the UK urgently transfer €220,000 to the account of a Hungarian supplier. The head of the British firm suspected a catch when his “boss” asked for a second transfer, but the call came from an Austrian number. By that time, the first tranche had already been transferred to an account in Hungary, from where the money was withdrawn to Mexico.

It turns out that current technologies make it possible to create an individual news picture filled with fake news. Moreover, it will soon be possible to distinguish fakes from real video and audio only with special technical means. According to experts, measures prohibiting the development of neural networks are unlikely to be effective. Therefore, we will soon live in a world in which everything will have to be constantly rechecked.

“We need to prepare for this, and it must be accepted,” Yuri Vilsiter emphasized. - Humanity is not passing from one reality to another for the first time. Our world, way of life and values are radically different from the world in which our ancestors lived 60,000 years ago, 5,000 years ago, 2,000 years ago, and even 100-200 years ago. In the near future a person will be largely deprived of privacy and therefore forced to hide nothing and act honestly. At the same time, nothing in the surrounding reality or in one's own personality can be taken on faith; everything will have to be questioned and constantly rechecked. But will this future reality be dire? No. It will simply be completely different.
