Deepfake technology can seamlessly stitch anyone in the world into a video or photo they never actually participated in. Such capabilities have existed for decades; that’s how the late actor Paul Walker was digitally resurrected for Fast & Furious 7. But it used to take entire studios full of experts a year to create these effects.

Deepfakes (a portmanteau of “deep learning” and “fake”) are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. The main ingredient in deepfakes is machine learning, which has made it possible to produce them much faster and at much lower cost. To make a deepfake video of someone, a creator first trains a neural network on many hours of real footage of that person, giving it a realistic “understanding” of what he or she looks like from many angles and under different lighting. The trained network is then combined with computer-graphics techniques to superimpose a copy of the person onto a different actor.
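
To make that training-then-superimposing step concrete, here is a minimal sketch, assuming PyTorch, of the shared-encoder, dual-decoder autoencoder popularized by early open-source face-swap tools. The layer sizes, the 64x64 crop size, and the training details are illustrative assumptions, not any specific tool’s implementation.

```python
import torch
import torch.nn as nn

# Sketch of the classic face-swap setup: one shared encoder, one decoder
# per identity. All sizes are illustrative.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(              # 3x64x64 face crop -> latent code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),       # compact pose/lighting/identity code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()   # learns to reconstruct person A's face
decoder_b = Decoder()   # learns to reconstruct person B's face
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's faces
    (pixel values in [0, 1]) from the shared latent code."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

def swap_b_to_a(faces_b):
    """The swap itself: encode person B's face, decode with A's decoder,
    yielding A's likeness in B's pose and lighting."""
    with torch.no_grad():
        return decoder_a(encoder(faces_b))
```

The trick is that the shared encoder learns pose and lighting cues common to both people, while each decoder learns one person’s likeness; feeding person B’s latent code to person A’s decoder is what produces the face swap.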

While machine learning makes the process far faster than traditional effects work ever was, it still takes time to yield a believable composite that places a person in an entirely fictional situation. The creator must also manually tweak many of the trained program’s parameters to avoid telltale blips and artifacts in the image. The process is hardly straightforward. Many people assume that a class of deep-learning algorithms called generative adversarial networks (GANs) will be the main engine of deepfake development in the future.
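
For a sense of what that adversarial setup looks like, here is a minimal sketch of a GAN training loop, again assuming PyTorch. The tiny networks and hyperparameters are placeholders, since real image-synthesis GANs are vastly larger, but the two-player dynamic is the same.

```python
import torch
import torch.nn as nn

# Sketch of the GAN dynamic: the generator improves by fooling the
# discriminator; the discriminator improves by catching the generator.

latent_dim = 64
generator = nn.Sequential(                      # noise -> fake 64x64 RGB image
    nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
    nn.Unflatten(1, (128, 8, 8)),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # pixels in [-1, 1]
)
discriminator = nn.Sequential(                  # image -> logit that it is real
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_images):
    """One adversarial round; real_images scaled to [-1, 1]."""
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)

    # Discriminator: label real images 1, generated images 0.
    opt_d.zero_grad()
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) \
           + bce(discriminator(fake), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator say "real" for fakes.
    opt_g.zero_grad()
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each step pits the two networks against each other, and as the discriminator gets better at flagging fakes, the generator is pushed to produce images that are harder to distinguish from real ones, which is part of why GAN output keeps improving.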

In the wrong hands, deepfakes have garnered widespread attention for their use in celebrity pornographic videos, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use. The uncertainty such fakes create points to the biggest danger of deepfakes, whatever the technology’s current capabilities: the liar’s dividend, which is a fancy way of saying that the very existence of deepfakes provides cover for anyone to do anything they want, because they can dismiss any evidence of wrongdoing as a deepfake.

For now, the solution to this problem may have to be driven by individuals until governments, technologists, and companies can mount an effective response. If there isn’t an immediate push for an answer, though, it could be too late. What we should all do is demand that the platforms that propagate this material be held accountable, that governments support efforts to ensure the technology’s positive use cases outweigh the negatives, and that education gives people the awareness to recognize deepfakes and the good sense not to share them.

