The Synthetic Media vs. Security Debate

Aditi Bainss
Generative AI
November 13, 2023
4 min read

Remember the viral videos on Instagram and TikTok where someone would cut a seemingly real object - a pair of shoes, a sunflower, or even a garbage truck - with a knife, and it turned out to be cake? The anticipation of whether it's cake or fake garnered the attention of millions. That near-perfect resemblance to real-life objects is part of what makes synthetic media such an exhilarating space.

This resemblance also fuels a heated debate over whether synthetic media and artificial intelligence are the biggest security threat humankind has ever had to tackle.

The Prevalence of Synthetic Media Worries Many

Recently, synthetic media and generative AI have been revolutionizing the media and content space, allowing the creation of digital content that is more realistic, engaging, and interactive than ever before. From hyper-realistic digital versions of animals and objects such as cars, buildings, and landscapes, to convincing renderings of celebrities, politicians, and other public figures, generative AI can take in a wide variety of data and generate new content.

Unfortunately, deepfake media - a colloquial, blanket term mistakenly used to cover all types of synthetic media - has concerned many. Deepfakes are a type of synthetic media created by manipulating video or audio to make it appear as if someone said or did something they didn't.

Image source: Shamook

The most obvious danger is that deepfakes can be used to spread false information. By creating a convincing video of a political leader or celebrity saying something they never said, deepfakes can be used to manipulate public opinion and cause confusion. This can lead to a breakdown of trust in our political and social systems, as people may not know what to believe. This, however, does not mean that all synthetic media are deepfakes. 

Read more: Deepfake vs Rephrase.ai videos

Setting Deepfakes Apart from Ethically Produced Synthetic Media Is Key

A crucial differentiator between deepfakes and ethically produced synthetic media is, of course, consent. Synthetic videos generated by deep synthesis companies such as Rephrase.ai often employ facial reenactment of a person who has consented to their videos being generated. Deepfakes, on the other hand, involve a complete transfer of one person's movements and expressions onto another without any consent from the person on the screen. That said, detecting deepfakes and differentiating them from ethically produced synthetic media is notoriously difficult. In fact, cementing what is ethical and what isn't can be tricky too.

Learn more: Assured by Rephrase.ai: Our Pillars of Certainty & Security

Case in point:

When "Roadrunner: A Film About Anthony Bourdain", a documentary about the celebrity chef, author, and travel documentarian, was released in 2021, it was met with scathing controversy. The documentary featured a computer-generated version of Bourdain's voice, created using audio from his past interviews and shows.

Many people felt this was wrong, as it was a form of cloning Bourdain's voice without his permission. They argued that it was a violation of Bourdain's rights and that it was disrespectful to his memory.

Others argued that the documentary was a fitting tribute to Bourdain and that the synthetic voice was meant to honor him - a way to keep his memory alive and ensure his voice would be heard by future generations.

Laws Around Cracking Down on Unethical Use of Synthetic Media Can Be Open To Interpretation

In January 2023, China followed through on its plan to rein in deepfake tech. Deep synthesis service providers in China are now required to register with the government, clearly label their products as synthetic media, submit their products for government review before public release, and monitor content for signs of malicious use, such as fraud or defamation. It is a first-of-its-kind regulation, arriving at a time when much-needed cybersecurity laws remain largely amorphous.

However, since detecting deepfakes is no child's play, the terms of such policies can be ambiguous, opening a window for the government to access a company's code and raising serious privacy concerns for the company's clients. Such bans can also be used by governments to curb free speech and suppress criticism of society and the state.

So Where Do We Go from Here?

By the end of 2023, an estimated 30 billion devices will be connected to the Internet, so no one-law-fits-all approach will provide a blanket answer here. Governments will need to accommodate all sorts of anomalies and new developments in close collaboration with deep-tech synthetic media companies.

Plus, as with any technology of such nuance, awareness via public education campaigns is critical. What looks or sounds like someone may not be them. The idea, then, is to build training curricula that help people unlearn ingrained Internet usage and consumption habits.

This is also a crucial moment for Internet security leaders to figure out how to raise the cost of deepfake attacks and put roadblocks in their way. While we're still in the formative years of generative AI and synthetic media, make no mistake: a new tech era is about to unfold. And in this new online generation, cutting-edge deepfake detection research and zero-trust vigilance will be the foundation of a safe, ethical, and responsible digital age.

The illustration for this blog was generated using Midjourney.
