Politics: Deep Fakes and the Truth (2024)

In the recently concluded election in Pakistan, the jailed former Prime Minister, Imran Khan, was sealed off by the military establishment from making any public appearance, even a virtual one. All symbols associated with his party, the PTI, were erased, including its highly popular cricket-bat symbol. The aim was to deny public exposure to the party and its incarcerated leader, who has become a bête noire for the all-powerful military. PTI's digitally savvy team of IT experts worked around this obstacle using AI-generated clips of Mr Khan delivering election speeches in a synthesised version of his own voice. The clips gave his supporters a new sense of hope, a feeling that their leader was among them. The result of this AI-driven campaign was stupendous: PTI-backed candidates emerged as the largest bloc despite credible reports of widespread rigging, without which the party might well have secured a two-thirds majority.

Similarly, Mr Modi, an ardent user of digital technology, used the AI-assisted Bhashini application to convert his Hindi speeches into Tamil while campaigning in the South, successfully bridging the language gap in a multilingual nation.

Clearly, AI's role in politics has expanded because it can quickly process large volumes of data and generate insights that were previously unattainable, given human limits on the speed and accuracy of data processing.

However, the very qualities that have made AI an asset in politics have also made it a threat to the integrity and sincerity of democratic processes. It has introduced challenges and ethical considerations regarding privacy, misinformation, and influence, underscoring the need for careful regulation and oversight.

On the governmental side, AI aids in cybersecurity efforts to protect electoral and governmental digital infrastructures from hacking and other malicious activities, which are increasingly a concern in maintaining the integrity of elections. AI tools are developed to detect and counteract "fake news" and misinformation online, although the same technologies can, unfortunately, also be used to create sophisticated disinformation campaigns.

Background

Political campaigns now use AI to analyse voter data and tailor messages that resonate with specific demographics. AI algorithms can predict voter behaviour, optimise outreach strategies, and personalise communication, which increases campaign efficiency and effectiveness. AI tools monitor and analyse social media platforms to gauge public sentiment and track the spread of topics. This technology allows political entities to understand current public opinions and predict future trends in real-time, adjusting their strategies accordingly.

AI is also being used to assist in crafting speeches and managing public communications. One practical application came from a U.S. municipal government that experimented with AI tools for drafting public announcements during the COVID-19 pandemic. AI systems can suggest adjustments in messaging to better connect with voters.

Deep fakes are hyper-realistic digital fabrications created using advanced deep learning techniques, typically generative adversarial networks, a subset of machine learning and a prime example of AI's shadowy side. These networks involve two models: one that generates data (the generator) and another that evaluates its authenticity (the discriminator). Through iterative training, in which the generator continuously learns to produce more convincing forgeries and the discriminator learns to detect them, the output gradually becomes indistinguishable from the genuine article.
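The generator-discriminator loop can be sketched in miniature. The toy below is a hypothetical illustration, not a production GAN: the generator has a single parameter (a shift applied to noise), the discriminator is a one-weight logistic scorer, and the data are one-dimensional numbers rather than images. Real deep fakes use deep networks on pixels and audio, but the alternating update structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian centred at 4.0.
def real_samples(n):
    return rng.normal(4.0, 0.5, n)

# Generator: turns noise into samples; its only parameter is a shift.
def generate(noise, shift):
    return noise + shift

# Discriminator: logistic scorer giving the probability a sample is real.
def discriminate(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

shift, w, b, lr = 0.0, 0.1, 0.0, 0.01

for _ in range(400):
    noise = rng.normal(0.0, 0.5, 64)
    fake, real = generate(noise, shift), real_samples(64)

    # Discriminator step: push scores on real data toward 1, on fakes
    # toward 0 (gradient ascent on log D(real) + log(1 - D(fake))).
    dr, df = discriminate(real, w, b), discriminate(fake, w, b)
    w += lr * np.mean((1 - dr) * real - df * fake)
    b += lr * np.mean((1 - dr) - df)

    # Generator step: move the fakes so the discriminator scores them as
    # real (gradient ascent on log D(fake) with respect to the shift).
    df = discriminate(generate(noise, shift), w, b)
    shift += lr * np.mean((1 - df) * w)

# The generator's shift has drifted from 0 towards the real mean of 4.0.
print(f"learned shift = {shift:.2f}")
```

Run for a few hundred steps and the generator's output drifts from its starting point towards the real data, mirroring, in one dimension, the dynamic by which fake imagery converges on the look of genuine footage.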

Deep fake technology originated from academic research and developments in machine learning, particularly through generative adversarial networks (GANs) introduced by Ian Goodfellow and his colleagues in 2014. These early GAN models laid the groundwork for more sophisticated image and video manipulation techniques.

Deep fake technology was largely experimental in its early stages, confined to research settings and benign applications such as de-ageing actors or recreating deceased characters in the film industry. The quality of deep fakes improved drastically around 2017 and 2018 with the emergence of open-source software that allowed average users to create convincing fake videos. This democratisation of deep fake technology coincided with a noticeable shift towards more malicious applications, and deep fakes began to emerge as tools for misinformation and propaganda in politics. Notably, fabricated videos of politicians saying or doing controversial things could be rapidly disseminated across social media platforms, influencing public opinion or discrediting individuals.

In recent years, the use of deep fakes in political contexts has become more sophisticated and concerning. For instance, deep fakes have been used during elections to create false endorsements or spread fake news. A synthetic speech falsely attributed to President Biden, in which he appeared to make remarks about financial instability, had the potential to cause chaos in the markets and mislead corporate leaders, and it very nearly succeeded. This misuse has raised alarms globally, prompting governments and international bodies to consider new regulations and technological developments to detect and counter deep fakes.

The potency of deep fakes lies in their ability to convincingly replicate the appearance and behaviour of real entities, often humans, making them particularly effective tools for misinformation and manipulation.

Common applications include:

  • Facial Image Manipulation: This involves altering or completely fabricating videos and images of people, often celebrities or political figures, making them appear to say or do things that never happened. For instance, a well-known deep fake video showed a digitally altered version of former U.S. President Obama delivering a public address he never gave. More maliciously, during the 2020 U.S. elections, deep fakes and manipulated media were used on smaller scales to discredit candidates and spread misinformation, showcasing the potential for such technology to influence electoral outcomes.
  • Voice Synthesis: Deep fake technology can also synthesise voices, creating audio recordings that mimic a person's vocal characteristics so closely that it becomes difficult to distinguish the fake from the genuine voice, as in the example of Imran Khan's speeches from prison cited earlier.
  • Body Movement and Expression Mimicry: Beyond faces and voices, deep learning algorithms can also simulate specific body movements and expressions, attributing them to individuals in fabricated scenarios, such as making a political leader appear in a compromising or controversial situation.

Analysis

Deep fakes' realism and potential impact on public opinion, personal reputations, and even international relations require urgent attention from technologists, lawmakers, and the general public to manage and mitigate their risks effectively.

The increasing threat posed by deep fakes has led to a surge in efforts to develop detection technologies. Universities, tech companies, and government agencies are actively researching methods to automatically detect deep fakes by analysing video inconsistencies or using blockchain to verify the authenticity of digital media.
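Of the two approaches, authenticity verification is the easier to illustrate. The sketch below is a minimal, hypothetical example using Python's standard hashlib rather than any particular blockchain product: a publisher registers a cryptographic fingerprint of the original file (on a public ledger, in the blockchain variant), and any circulating copy can later be re-hashed and compared, since even a one-byte alteration changes the digest completely.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of a media file, suitable for registering in a public ledger."""
    return hashlib.sha256(media_bytes).hexdigest()

# At publication time, the outlet registers the fingerprint of the original clip.
original = b"...raw video bytes of the authentic clip..."
registered = fingerprint(original)

# Later, anyone can re-hash a circulating copy and compare it to the record.
copy_intact = original
copy_edited = original + b"\x00"     # even a one-byte change breaks the match

print(fingerprint(copy_intact) == registered)   # True: matches the original
print(fingerprint(copy_edited) == registered)   # False: altered somewhere
```

Note that this only proves a copy matches what was registered; detecting a deep fake that was never registered anywhere is the much harder machine-learning problem the paragraph above describes.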

India has seen deep fakes used in political campaigns, but with a unique twist: they are sometimes used openly to reach voters in different languages. For instance, in the 2020 Delhi legislative assembly election, a political party used a deep fake of a prominent leader speaking Haryanvi, a language he does not speak, to connect with a specific voter base. This use was not meant to deceive but to broaden communication; nevertheless, it raised ethical concerns about the authenticity of communication from public figures.

Deep fakes can sway public opinion by creating convincing false narratives. These changes in perception can be particularly impactful close to elections, where the timely release of such content has the potential to shift voter intentions decisively. The realistic quality of deep fakes means that even when debunked, the initial emotional impact and seed of doubt can linger, which might affect how people vote. In Gabon, during a coup attempt in 2019, a deep fake video of President Ali Bongo, who was recovering from a stroke, was released, allegedly to show him in better health than he was. This was speculated to be an effort to stabilise the political situation by showing the president capable of governing, despite doubts among the public and military about his fitness to rule.

The rise of deep fakes suggests a future where the authenticity of audiovisual content can be questioned, leading to a 'reality apathy' in which people may begin to care less about what is true or false. This undermines the foundation of informed decision-making in democracies.

Assessment

  • The inability to distinguish real from fake content has several psychological effects on voters: confusion and uncertainty, erosion of trust, and emotional manipulation of the public, among others.
  • There is a growing need for robust mechanisms to verify content, educate the public on media literacy, and develop technological solutions to detect and flag deep fake content effectively.
  • Without these measures, the potential for deep fakes to manipulate elections and disrupt democratic processes is immense and deeply concerning.

FAQs

What is deepfake in politics?

Politicians including US President Joe Biden and Mayor of London Sadiq Khan have found themselves victims of deepfakes: fake images or audio recordings generated by artificial intelligence that spread rapidly across social media.

Are there laws against deepfakes?

Beginning in 2019, several states passed legislation aimed at the use of deepfakes. These laws do not apply exclusively to deepfakes created by AI. Rather, they more broadly apply to deceptive manipulated audio or visual images, created with malice, that falsely depict others without their consent.

What is the political controversy with AI?

Many state-level bills focus on transparency, mandating that campaigns and candidates put disclaimers on AI-generated media. Other measures would ban deepfakes within a certain window — say 60 or 90 days before an election. Still others take aim specifically at AI-generated content in political ads.

How are deepfakes affecting society?

Deepfake videos can manipulate public opinion and erode trust in media and public sources. The ability to fabricate realistic videos of public figures, politicians, or celebrities saying or doing things they never actually did can have far-reaching consequences for society and democratic processes.

What are some examples of deepfakes?

One example involving celebrities is a TikTok account that posts videos of a person who bears the likeness of actor Tom Cruise. An article on MIT's website provides another: a video featuring the late former President Richard Nixon announcing a failed moon landing.

Is deepfake a threat?

Yes. Deep fake processes can be used in social engineering, for example to carry out targeted phishing attacks (spear phishing) to gain information and data. An attacker can also use this technology to commit fraud and siphon off financial resources.

What can the government do to stop deepfakes?

To combat such abuses, technologies can be used to detect deepfakes or enable authentication of genuine media. Detection technologies aim to identify fake media without needing to compare it to the original, unaltered media. These technologies typically use a form of AI known as machine learning.

What are the arguments against deepfakes?

They can be designed to harass, intimidate, demean and undermine people. Deepfakes can also create misinformation and confusion about important issues. Further, deepfake technology can fuel other unethical actions like creating revenge porn, where women are disproportionately harmed.

Does the First Amendment protect AI?

Nevertheless, people and companies that use AI to produce content that they claim as their own have First Amendment rights as speakers. And people have rights to read or listen to content produced by AI, even though AI itself has no First Amendment rights.

Why is AI harming society?

AI algorithms are programmed using vast amounts of data, which may contain inherent biases from historical human decisions. Consequently, AI systems can perpetuate gender, racial, or socioeconomic biases, leading to discriminatory outcomes in areas such as hiring, lending, and criminal justice.

What is the political party run by AI?

Det Syntetiske Parti (English: The Synthetic Party) is the world's first political party driven by artificial intelligence, with the goal of making generative text-to-text models not merely populist, as they are by default, but democratic.

Does AI violate human rights?

AI is infiltrating almost every aspect of what it means to be human. Through its ability to identify, classify, and discriminate, it has the potential to impact almost all our human rights.

What rights do deepfakes violate?

Deepfakes primarily implicate the right to privacy and publicity laws. Many jurisdictions require consent to use an individual's likeness or personal data, but enforcing those requirements against deepfakes, especially anonymously created ones, is challenging; such fakes infringe on an individual's right to control their own image and likeness.

What is the bad side of deepfakes?

Beyond creating confusion, scepticism, and the spread of misinformation, deepfakes pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity theft operations with alarming precision.

Why should we be worried about deepfakes?

Deepfakes are creating havoc across the globe, spreading fake news and pornography, being used to steal identities, exploiting celebrities, scamming ordinary people and even influencing elections.

What are celebrity deepfakes?

VERIFY, a team of journalists and researchers that work with newsrooms to fact-check supposed news stories, indicates that “a deepfake video is made using artificial intelligence technologies, like programs that can be used to replace or synthesize faces, speech or expressions of emotions.”

What crime is deepfake?

Under the Online Safety Act, which was passed last year, the sharing of deepfakes was made illegal. The new law will make it an offence for someone to create a sexually explicit deepfake - even if they have no intention to share it but "purely want to cause alarm, humiliation, or distress to the victim", the MoJ said.

What are deepfake attacks?

Video deepfakes involve altering a person's face or body to look like someone else, and they are often used in celebrity face swaps or political misinformation. Audio deepfakes mimic someone's voice, allowing a person to be convincingly replicated saying things they've never said.

What is the abuse of deepfake technology?

On the flip side, deepfakes can be used to spread misinformation, tarnish reputations or perpetrate fraud, such as manipulated ads featuring Joe Rogan endorsing a supplement he didn't endorse. Public figures such as politicians and celebrities are particularly vulnerable victims.
