
Weapon of Mass Confusion

A Geofany


On February 12, a retweet containing a video crossed my timeline. At first glance it looked like an ordinary clip from the Joe Rogan podcast, but the caption expressed concern about whether the video was genuine. On a second look I realized it was a deepfake, with Rogan apparently promoting a male enhancement supplement.

A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.

In the video, Joe Rogan appears to discuss the supplement with Andrew D. Huberman. According to Mashable, a digital media outlet, the video had been circulating on TikTok for some time, uploaded by the account @mikesmithtrainer. Huberman clarified through his Twitter account that he has never endorsed the supplement.

The video has since been taken down from TikTok and Twitter. Still, it sparked a discussion about how AI-assisted video synthesis technology is being misused.

I then wondered how Indonesian netizens would react to something like this. Could they tell a deepfake from a genuine video? And what happens when this technology can produce deepfakes so realistic and seamless that they surpass our ability to identify them as fakes?

Deepfakes Are Getting Better & Cheaper

Video synthesis technology has been around for decades. More than 25 years ago, Christoph Bregler and colleagues developed a technique that could manipulate recorded speech by re-animating a speaker’s mouth movements. The field has kept developing, and with the emergence of deep learning the results of video synthesis have become far more realistic and believable.

Last year Metaphysic, a company that develops video synthesis technology, performed on America’s Got Talent (AGT). They managed to deepfake a group of performers live on stage, swapping their faces so that they appeared as Elvis Presley, Simon Cowell, Heidi Klum and Sofia Vergara.

The act drew a standing ovation from the audience and the judges. Simon Cowell even commented on how blown away he was by the range of expressions, calling it one of the most incredible and original acts he had seen on AGT.

Deepfakes are not only used for entertainment. MBN, a South Korean TV station, has begun broadcasting news with this technology: it launched a news program in which anchor Kim Joo-ha is synthesized using deepfakes, which allows breaking news to be broadcast even when Kim Joo-ha cannot present it live.

Deepfakes can also cut production costs and reduce the need for production personnel while making information and entertainment more inclusive, since they allow producers to change the language of a piece of media.

Today’s deepfakes may look like state-of-the-art technology, but believe me, they will look primitive in a few years. Current deepfakes still carry an uncanny-valley feeling and somewhat stiff movement, but as deep learning develops they will become better, faster and cheaper in no time.

Deepfake use is not all sunshine and rainbows, though. Diakopoulos and Johnson (2020) argue in their paper that deepfakes pose a threat at least to the person being deepfaked, and potentially to the integrity of formal institutions (in their case, democracy).

Beyond the election context, deepfakes can be, and already have been, misused in other ways, for instance to fabricate endorsements without consent.

As we have discussed, a deepfake can fabricate almost anything as long as there is a sufficiently rich dataset behind it: anyone’s face and voice can be collected and fed to an AI model that learns to synthesize them (a rough sketch of the core idea follows below). The Joe Rogan video above is a concrete example. It harms us as viewers, because we are deceived by the ad; it harms Joe Rogan, whose face and voice were used without his consent; and it damages TikTok’s brand, because the platform is now on record as a channel for spreading malicious fake content.
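
To make that concrete, here is a minimal, hedged sketch in PyTorch of the classic face-swap recipe popularized by early open-source deepfake tools: a shared encoder with one decoder per identity. Every name, layer size and the random tensors standing in for cropped face frames are illustrative assumptions, not the code of any real tool.

```python
# Illustrative sketch only: a shared encoder plus one decoder per identity.
# All sizes, names and the random "face" tensors are assumptions for clarity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face from the latent vector, for ONE identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder   = Encoder()
decoder_a = Decoder()   # learns to rebuild person A's faces
decoder_b = Decoder()   # learns to rebuild person B's faces

# Training (simplified): each decoder reconstructs its own person's faces
# from the SHARED latent space learned by the single encoder.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
faces_a = torch.rand(8, 3, 64, 64)   # stand-in for cropped frames of person A
faces_b = torch.rand(8, 3, 64, 64)   # stand-in for cropped frames of person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
opt.zero_grad(); loss.backward(); opt.step()

# The "swap": encode person A's expression, decode it with person B's decoder,
# so B's face appears to make A's expressions. Real tools add face detection,
# alignment and blending around this core idea, plus far more data and training.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))   # shape (8, 3, 64, 64)
```

The uncomfortable point is how short the core recipe is: the hard part is mostly collecting enough footage of the target, which for public figures is freely available.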

An even more problematic case is the use of deepfakes in pornography. Who could forget the case of Atrioc, a streamer who was caught watching deepfake porn videos of his own friends? Sure, he apologized and the company that made the videos shut down the website, but I think this is just the tip of the iceberg.

Deepfakes are a double-edged sword. On one hand, they can improve the quality of entertainment and the inclusivity of information broadcasting; on the other, if the technology falls into the hands of a malicious actor it can become a weapon of mass confusion that injures many parties, including the public. Just imagine deepfakes being used to falsify evidence in court, to fake the documents required for an online loan application, or to fabricate statements from political figures. The possibilities seem endless.

Deepfakes & Indonesian Netizens

Now let’s come back home, because sooner or later this technology will be used here too. Do we have the ability to identify a deepfake?

Based on a survey conducted by Kominfo and Katadata, Indonesian netizens seek information from three main sources: social media, television and online news. The survey finds that 72.6 percent of the respondents get their information from social media, 60 percent from television and 27.5 percent from online news.

One finding from the survey that could become a major concern is the significant increase in the consumption of short videos (under four minutes in length). TikTok, Instagram and YouTube provide platforms that are very easy to access and more or less addictive. TikTok itself has doubled its growth this year compared to last year.

My concern is that short videos, which in most cases strip away context, are easier and cheaper to produce. That makes it easy for malicious actors to create convincing fake news using deepfake technology. And please, don’t even get me started on buzzers or Andrew Tate-style schemes; deepfakes are already bad enough.

Although 60 percent of the respondents in the survey above were able to identify trusted online news sources, the remaining 40 percent still consume news written by anonymous authors and read articles full of unsubstantiated claims from unclear sources.

There was a positive development in the world of social media last year, though: Facebook and WhatsApp, which in 2020 had become hotbeds for spreading hoaxes, saw a significant decrease in the spread of hoaxes on both platforms.

To make matters worse, a huge majority of those surveyed doubt or are unsure of their ability to tell whether the information they receive or share is false or valid; only 32 percent said they were confident they have that ability.

This inability could deepen the mass confusion as hoaxes grow in quality and sophistication, for example through the use of deepfakes. Imagine that sooner rather than later we lose the ability to tell true information from false. Every video becomes potentially fake: we dismiss something as fake when in reality it is genuine, or the other way around. All day long we waste ourselves wondering, to quote Queen’s Bohemian Rhapsody, is this the real life, or is this just fantasy?

Imagine, too, another pandemic, this time with deepfakes in play. Remember how hard it was to fight the false and misleading information that flooded the internet during the pandemic? Now imagine that misinformation injected with the steroid of deepfakes. It would be like fighting a pan-resistant Pseudomonas in another form.

That is a scenario that keeps me awake at night.

A Geofany
certified fool - here for kicks and giggles - Often tweets on X @adiksi_fiksi

