Surely by now, while browsing the internet, you have come across a video in which some person or public figure does things you would never have imagined possible. We see former U.S. president Barack Obama bluntly insulting Donald Trump, the actor Jim Carrey replacing Jack Nicholson in The Shining, and dozens of similar examples that are hard to believe. But it is not that these people have lost their minds and decided to do outlandish things: these improbable images are achieved by means of a technique known as deepfake (or "ultrafalso", an alternative term accepted by Fundéu BBVA), which makes it possible to create content that is extremely realistic and entirely fictitious.
In a nutshell, it is an image-synthesis technique that takes existing videos, photographs or even audio recordings and combines them seamlessly, by means of artificial intelligence, with new faces, voices or environments. And although it may sound like fun, the truth is there is nothing innocent about it.
This situation and its possible consequences have already attracted the attention of several institutions and research groups, as well as the large technology companies. To highlight how easy it could be to fall victim to this kind of deception, PBS produced a special report on the subject for its NOVA series, where you can see several examples of fake videos:
What is the worst that could happen?
At first, this technique was used to create fake pornography, placing the faces of celebrities or ex-partners on the bodies of people having sex. That was disturbing enough, but from there the jump to politics was a natural one: a deepfake can easily be used to discredit or blackmail a high-profile public figure, or to misinform and manipulate the population at large. In an era in which rigging elections seems easier than ever for anyone with the right technology (or the money to pay for it), this is not something to be taken lightly.
In less public arenas the danger is just as present. So-called voice deepfakes have already cost some companies millions of dollars through fraud in which criminals, using the "voice" of a company director, have ordered huge sums of money transferred to their own accounts.
What is being done about it?
Facebook, a constant presence in technology controversies, has offered a monetary prize to the person or group that can create an effective means of detecting deepfakes. This initiative, known as the Deepfake Detection Challenge, is a partnership between Facebook, Microsoft, the Partnership on AI and academics from various institutions.
The Deepfake Detection Challenge launched in October 2019 and will run until May 2020, offering collaborations and prizes to encourage participation. Facebook has invested 10 million dollars in the project.
Google, for its part, has created a database of more than 3,000 altered deepfake videos to encourage the development of systems designed to detect content modified with artificial intelligence. This is key to advancing that work, since until now there was not enough material for adequate study and, with the next U.S. presidential election approaching, time is of the essence.
What can you do?
If you look carefully, deepfakes may not be as perfect as you think. Perhaps the person blinks less than they should, or the head movements or the lighting don't look entirely natural. If something seems strange, follow your instinct.
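The blinking cue mentioned above is the basis of a real research heuristic: tracking the "eye aspect ratio" (EAR) across video frames and counting how often it dips, since early deepfakes blinked unnaturally rarely. Below is a minimal, self-contained sketch of that idea in Python. The landmark coordinates and EAR values are hypothetical illustrative data, not output from any actual face tracker:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) landmark points around one eye, following the
    classic formulation by Soukupova & Cech (2016): the ratio drops
    sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # (sum of the two vertical eye openings) / (2 * horizontal width)
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_series, threshold=0.2):
    """Count blinks in a per-frame EAR series: each contiguous run of
    frames below the threshold counts as one blink."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

# Hypothetical EAR values over ten frames: one blink around frames 4-5.
ears = [0.31, 0.30, 0.29, 0.12, 0.10, 0.28, 0.30, 0.31, 0.30, 0.29]
print(count_blinks(ears))  # one dip below the threshold -> 1 blink
```

In a real detector the landmarks would come from a face-tracking library and the blink rate over the whole clip would be compared against typical human rates (roughly 15 to 20 blinks per minute); a rate far below that is one possible red flag.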
However, it is a fact that this technology is progressing exponentially and, if we don't pay attention, it appears to be nearing perfection. And we, after all, are only human. For that reason, common sense is the best tool in any situation. Don't believe everything you see, look for different sources to corroborate a fact, and assume that anything could be fake until proven otherwise. It may well be.