In May of this year, the Democrat Nancy Pelosi, Speaker of the U.S. House of Representatives, gave a talk at the Center for American Progress. A few days later, footage of her speaking at that event, appearing to be under the influence of some kind of substance, went viral. Even President Donald Trump could not resist sharing it.

Be that as it may, this video welcomes us to a previously unseen dimension of disinformation politics: a video is circulated that looks real but is not, something that has come to be baptized a deepfake.

A deepfake is a type of media fake: a video of a person doing something they never did or saying something they never said. There are several kinds of media fakes, the deepfake being the most sophisticated of all: a computational model based on deep learning (artificial intelligence) whose images are generated mathematically, by algorithms, from photos and videos of the person one wants to recreate.

Although the first deepfakes were created about five years ago, the term was not coined until 2017, on Reddit, and has gradually become popular since then. With the progress made, above all, in artificial intelligence, this year is the first in which such videos are being spread at a significant scale, videos whose images we may not be able to tell are fake at first glance.


The unstoppable democratization of technology has made it possible for anyone with access to an editing program or app to manipulate or alter a photo or video. But deepfakes go one step further in this kind of manipulation because of the perfection they achieve, and they may become a dangerous political weapon, used to manipulate public opinion or destabilize democratic systems, as well as a dangerous instrument against the integrity of more anonymous profiles: as a weapon of revenge porn or through malicious use in the business world.

For this reason, the emergence of this technology at scale, besides being alarming, poses challenges relevant to everyone. It is a new, unknown form of manipulation that we will have to face. As if it were a game, from now on nothing we see on social networks can be taken to be true. A new reality for the information industry.

The concern is significant, especially among the tech giants. That is why some of them have already begun to take action and are releasing tools that enable the development of applications to combat what they understand may be their next problem.

Google has released deepfake datasets that it built itself, so that researchers can design tools for their detection. With the same aim, Facebook, together with Microsoft, MIT, Berkeley and other academic institutions, has just launched the Deepfake Detection Challenge (DFDC). Beyond private companies, the U.S. agency DARPA, through its MediFor (Media Forensics) program, also supports a number of initiatives to combat this new wave of synthetic images.

Despite the looming threat of confusing reality with fiction and of a widespread state of disinformation, there is good news: not everyone can access this technology, for now, since it requires hyperspecialized computational knowledge. Only data scientists and deep learning experts are capable of building the algorithms that create these videos.

But we must not forget that the global scientific community shares its knowledge at conferences and on online platforms where advances are disclosed openly. In this regard, according to Deeptrace, 902 papers presenting advances in GANs (Generative Adversarial Networks), a technology created in 2014 and key to the generation of deepfakes, were published in 2018, compared with roughly 100 in 2016, evidence of an unstoppable, growing interest.
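To give a rough idea of the mechanism behind this technology, below is a minimal, purely illustrative sketch in Python (using PyTorch) of the adversarial setup a GAN relies on: a generator tries to produce convincing fakes while a discriminator tries to tell them apart from real data. The tiny fully connected networks, layer sizes and training loop here are assumptions chosen for brevity; real deepfake systems use far larger convolutional models trained on face images and video.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot distinguish from real data. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # assumed toy sizes, not real image dimensions

# Generator: maps random noise to a fake "sample".
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real from generated samples.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example usage with random stand-in "real" data:
for _ in range(100):
    train_step(torch.randn(32, DATA_DIM))
```

The competition between the two networks is what drives the realism: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones.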

In 2018 there were, according to Diffbot, 720,325 qualified professionals in the world with knowledge and skills in artificial intelligence (slightly more than 30% of them in the United States), people who will build our future, and we do not know whether with the degree of responsibility this requires, despite their good intentions.

Therefore, in spite of the hyperspecialization required, you never know which team may end up working for the wrong mission. This makes it more important than ever to create universal ethical principles and standardized codes of conduct, regardless of the organization in which scientists carry out their work.

We may lose our ability to discern whether what we see is true. Deepfakes are just one of the risks associated with the current technological whirlwind. We may be becoming more dystopian, but someone will know how to take advantage of it, or, in the words of George Orwell, "Reality control, they called it."

Sonia Pacheco is the director of the DES | Digital Enterprise Show congress.
