In the recently concluded election in Pakistan, the jailed former Prime Minister, Imran Khan, was barred by the military establishment from making any public appearance, even a virtual one. All symbols associated with his party, the PTI, were erased, including its highly popular emblem, the cricket bat. The aim was to deny public exposure to the party and its incarcerated leader, who has become a bête noire for the all-powerful military. PTI's digitally savvy IT team worked around this obstacle with AI-generated clips of Mr Khan delivering election speeches in a synthesised version of his own voice. The clips gave his supporters a new sense of hope, a feeling that their leader was among them. The result of this AI-assisted campaign was stupendous: PTI-backed candidates emerged as the largest bloc despite credible reports of widespread rigging, without which the party might well have secured a two-thirds majority.
Similarly, Mr Modi, an ardent user of digital technology, used the AI-assisted Bhashini application to translate his Hindi speeches into Tamil while campaigning in the South, successfully bridging the language gap in a multilingual nation.
Clearly, AI's role in politics has expanded because it can process large volumes of data quickly and generate insights previously unattainable given the limits of human speed and accuracy.
However, the very qualities that have made AI an asset in politics have also made it a threat to the integrity and sincerity of democratic processes. It has introduced challenges and ethical concerns around privacy, misinformation, and undue influence, underscoring the need for careful regulation and oversight.
On the governmental side, AI aids in cybersecurity efforts to protect electoral and governmental digital infrastructures from hacking and other malicious activities, which are increasingly a concern in maintaining the integrity of elections. AI tools are developed to detect and counteract "fake news" and misinformation online, although the same technologies can, unfortunately, also be used to create sophisticated disinformation campaigns.
Background
Political campaigns now use AI to analyse voter data and tailor messages that resonate with specific demographics. AI algorithms can predict voter behaviour, optimise outreach strategies, and personalise communication, which increases campaign efficiency and effectiveness. AI tools monitor and analyse social media platforms to gauge public sentiment and track the spread of topics. This technology allows political entities to understand current public opinions and predict future trends in real-time, adjusting their strategies accordingly.
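The sentiment-gauging idea described above can be sketched in miniature. The example below is a deliberately simplified, lexicon-based scorer; real campaign tools use trained language models, and the word lists, posts, and function names here are invented purely for illustration:

```python
# Illustrative word lists; production systems use trained models, not lexicons.
POSITIVE = {"great", "hope", "support", "win", "love"}
NEGATIVE = {"corrupt", "fail", "angry", "lies", "worst"}

def sentiment(post: str) -> int:
    """Score one post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def gauge(posts):
    """Average sentiment across a stream of posts."""
    return sum(sentiment(p) for p in posts) / len(posts)

posts = [
    "great rally today, so much hope",
    "the worst speech, full of lies",
    "I support this candidate",
]
print(gauge(posts))  # a rough, real-time temperature of the conversation
```

A campaign dashboard would run something like this continuously over incoming social media posts, segmented by region or demographic, which is how "real-time" opinion tracking becomes feasible at scale.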
AI is also being used to help craft speeches and manage public communications. One practical application came from a U.S. municipal government that experimented with AI tools for drafting public announcements during the COVID-19 pandemic. AI systems can also suggest adjustments in messaging to better connect with voters.
Deep fakes are hyper-realistic digital fabrications created using advanced deep learning techniques, most commonly generative adversarial networks (GANs), and they are a prime example of AI's shadowy side. A GAN pairs two models: one that generates data (the generator) and another that evaluates its authenticity (the discriminator). Through iterative training, in which the generator continuously learns to produce more convincing forgeries and the discriminator learns to detect them, the output gradually becomes indistinguishable from the genuine article.
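The adversarial loop just described can be shown in toy form. The sketch below is not a deep fake pipeline; it is a minimal one-dimensional GAN, written in pure Python with hand-derived gradients, where the "genuine articles" are just numbers drawn from a Gaussian. All parameters and learning rates are illustrative:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

# "Real" data: samples from N(3.0, 0.5) stand in for genuine articles.
REAL_MEAN, REAL_STD = 3.0, 0.5
def real_sample():
    return random.gauss(REAL_MEAN, REAL_STD)

w, b = 1.0, 0.0   # generator G(z) = w*z + b, fed noise z ~ N(0, 1)
a, c = 0.0, 0.0   # discriminator D(x) = sigmoid(a*x + c), "how real is x?"

LR, STEPS, BATCH = 0.02, 5000, 16
for _ in range(STEPS):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    ga = gc = 0.0
    for _ in range(BATCH):
        xr = real_sample()
        dr = sigmoid(a * xr + c)
        ga += (1.0 - dr) * xr          # d/da of log D(xr)
        gc += (1.0 - dr)
        z = random.gauss(0.0, 1.0)
        xf = w * z + b
        df = sigmoid(a * xf + c)
        ga += -df * xf                 # d/da of log(1 - D(xf))
        gc += -df
    a += LR * ga / BATCH
    c += LR * gc / BATCH

    # Generator step: adjust (w, b) so the fakes fool the discriminator.
    gw = gb = 0.0
    for _ in range(BATCH):
        z = random.gauss(0.0, 1.0)
        xf = w * z + b
        df = sigmoid(a * xf + c)
        grad_x = (1.0 - df) * a        # d/dxf of log D(xf)
        gw += grad_x * z
        gb += grad_x
    w += LR * gw / BATCH
    b += LR * gb / BATCH

# E[G(z)] = b: after training, the fakes' mean should drift toward 3.0.
print(f"generator mean {b:.2f} vs real mean {REAL_MEAN}")
```

The same push-and-pull, scaled up from a two-parameter line to deep networks over pixels and audio samples, is what drives fake faces and voices toward indistinguishability.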
Deep fake technology originated from academic research and developments in machine learning, particularly through generative adversarial networks (GANs) introduced by Ian Goodfellow and his colleagues in 2014. These early GAN models laid the groundwork for more sophisticated image and video manipulation techniques.
Deep fake technology was largely experimental in the early stages, used in academic settings for benign purposes such as in the film industry for de-ageing actors or recreating deceased characters. The quality of deep fakes improved drastically around 2017 and 2018 with the emergence of open-source software that allowed average users to create convincing fake videos. The democratisation of deep fake technology coincided with a noticeable shift towards more malicious applications. Deep fakes began to emerge as tools for misinformation and propaganda in politics. Notably, fabricated videos of politicians saying or doing controversial things could be rapidly disseminated across social media platforms, influencing public opinion or discrediting individuals.
In recent years, the use of deep fakes in political contexts has become more sophisticated and concerning. For instance, deep fakes have been used during elections to create false endorsements or spread fake news. A synthetic speech falsely attributed to President Biden, in which he appeared to comment on financial instability, could have caused chaos in the markets and misled corporate leaders, and it very nearly succeeded. This misuse has raised alarms globally, prompting governments and international bodies to consider new regulations and technologies to detect and counter deep fakes.
The nature of deep fakes lies in their ability to convincingly replicate the appearance and behaviour of real entities, often humans, making them particularly potent tools for misinformation and manipulation.
Common applications include:
- Facial Image Manipulation: This involves altering or completely fabricating videos and images of people, often celebrities or political figures, making them appear to say or do things that never happened. For instance, a well-known deep fake video showed a digitally altered version of former U.S. President Obama delivering a public address he never gave. More maliciously, during the 2020 U.S. elections, deep fakes and manipulated media were used on smaller scales to discredit candidates and spread misinformation, showcasing the potential for such technology to influence electoral outcomes.
- Voice Synthesis: Deep fake technology can also synthesise voices, producing audio that mimics a person's vocal characteristics so closely that the fake becomes difficult to distinguish from the genuine voice. Imran Khan's AI-generated prison speeches, cited earlier, are a case in point.
- Body Movement and Expression Mimicry: Beyond faces and voices, deep learning algorithms can also simulate specific body movements and expressions, attributing them to individuals in fabricated scenarios, such as making a political leader appear in a compromising or controversial situation.
Analysis
Deep fakes' realism and potential impact on public opinion, personal reputations, and even international relations require urgent attention from technologists, lawmakers, and the general public to manage and mitigate their risks effectively.
The increasing threat posed by deep fakes has led to a surge in efforts to develop detection technologies. Universities, tech companies, and government agencies are actively researching methods to automatically detect deep fakes by analysing video inconsistencies or using blockchain to verify the authenticity of digital media.
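The verification idea mentioned above can be reduced to its core: record a cryptographic fingerprint of a media file at publication time, then check any later copy against it. The sketch below uses a plain in-memory dictionary where a real provenance system would use a tamper-evident ledger (e.g. a blockchain); the identifiers and byte strings are invented for illustration:

```python
import hashlib

# In-memory registry standing in for a tamper-evident ledger.
registry = {}

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def publish(media_id: str, media_bytes: bytes) -> None:
    """Record the digest at publication time, the trusted moment."""
    registry[media_id] = fingerprint(media_bytes)

def verify(media_id: str, media_bytes: bytes) -> bool:
    """Check a later copy against the published digest."""
    return registry.get(media_id) == fingerprint(media_bytes)

original = b"raw video bytes of the genuine speech"
publish("speech-2024-03-01", original)

print(verify("speech-2024-03-01", original))               # True
tampered = original + b" with a deepfake edit"
print(verify("speech-2024-03-01", tampered))               # False
```

Such a scheme cannot tell whether the original was itself authentic; it only proves that a given copy has not been altered since publication, which is why it is paired with inconsistency-detection research rather than replacing it.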
India has seen deep fakes used in political campaigns, but with a unique twist: they are sometimes deployed openly to reach voters in different languages. For instance, in the 2020 Delhi legislative assembly election, a political party used a deep fake of a prominent leader speaking Haryanvi, a language he does not speak, to connect with a specific voter base. This use was not meant to deceive but to broaden communication; even so, it raised ethical concerns about the authenticity of communication from public figures.
Deep fakes can sway public opinion by creating convincing false narratives. These changes in perception can be particularly impactful close to elections, where the timely release of such content has the potential to shift voter intentions decisively. The realistic quality of deep fakes means that even when debunked, the initial emotional impact and seed of doubt can linger, which might affect how people vote. In Gabon, during a coup attempt in 2019, a deep fake video of President Ali Bongo, who was recovering from a stroke, was released, allegedly to show him in better health than he was. This was speculated to be an effort to stabilise the political situation by showing the president capable of governing, despite doubts among the public and military about his fitness to rule.
The rise of deep fakes suggests a future where the authenticity of audiovisual content can be questioned, leading to a 'reality apathy' in which people may begin to care less about what is true or false. This undermines the foundation of informed decision-making in democracies.
Assessment
- The inability to distinguish real from fake content has several psychological effects on voters: confusion and uncertainty, erosion of trust, and emotional manipulation of the public, among others.
- There is a growing need for robust mechanisms to verify content, educate the public on media literacy, and develop technological solutions to detect and flag deep fake content effectively.
- Without these measures, the potential for deep fakes to manipulate elections and disrupt democratic processes is immense and deeply concerning.