Perhaps the first truly global social media trend of 2019 is the #tenyearchallenge. Taking the online world by storm, the challenge entails users posting two photos of themselves taken a decade apart, purportedly to show off how much they have grown in the interim.

Doing the same for social media and its relationship with democracy would reveal mixed fortunes. A decade ago, for example, it was widely claimed that social media would usher in a new era in which democracy would flourish.

By providing alternative means of disseminating information, social media promised to strip authoritarian regimes of their veto over what information was available to the electorate. Social media, the belief then went, could only be good for democracy.

The results of the three general elections Malaysia has held since 2008 seem to bear this out. With most traditional media allegedly biased towards Barisan Nasional, social media played an immense role in disseminating the then Opposition’s politics and policies.

Yet, true as this is, the adverse effects of fake news on democracy, and social media’s enabling role in spreading it, are by now well-documented. Testament to this is the increasingly polarised information environment, split along racial and religious lines, on most of our social media timelines.

While the current government has promised to repeal the deeply problematic Anti-Fake News Act 2018, it has yet to lay out its plans for tackling fake news, let alone the novel forms of disinformation that lie in the not-too-distant future.

For example, the rudimentary bots we are accustomed to on social media today might become a thing of the past as natural-language processing technology advances. By utilising the same technology that underpins software such as Google Assistant, the bots of the future will be better able to understand content, and to reply automatically with syntactically and contextually correct political messages.

This increase in finesse will complicate the detection and removal of bots, especially compared with the easily identified bots of today, which operate on keyword-triggered scripts that either reply with preset messages or retweet and share certain posts.

Further, technologies such as “deepfakes”, a portmanteau of “deep learning” and “fakes”, will lower the barriers to entry for those bent on creating fake videos that look genuine. By pitting two deep learning algorithms against each other in what is known as a “generative adversarial network”, deepfake software is able to create incredibly realistic fake videos.

Meanwhile, advancements in audio-altering technology further blur the line between what is fake and what is real. In 2016, for example, Adobe demonstrated software called “VoCo” that could generate a transcript from an audio waveform, making the alteration of what was said as simple as removing or replacing words in the generated transcript.

The software giant claims that with just 20 minutes of a person’s voice recordings, VoCo can recreate any word in that person’s voice by leveraging artificial intelligence (AI) to identify and mash up the individual sounds that make up syllables.

As deepfakes and audio-altering technologies become mainstream, the barriers to entry for weaponising fake audio and video will fall dramatically. Those bent on undermining democracy could very well create fake videos of politicians appearing to say literally anything at all.

In response to this growing threat, technologists suggest that they too can train AI to spot bots and fake video and audio. On one hand, while AI-powered software could help detect bots, their removal would remain a cat-and-mouse game between social media companies and bot operators. On the other, even if AI could debunk fake video and audio, there is no guarantee it could be removed before going viral on the timelines of countless people.

Meanwhile, democratic governments continue to struggle to balance regulating fake news with maintaining freedom of speech. Perhaps the worst option would be for democratic countries to abandon their enlightened ideals in favour of censorship, even if done purportedly in the name of protecting democracy.

Regardless, any new technology or regulation to counter these threats to democracy will only be as effective as the people’s digital literacy. It cannot be stressed enough that only by equipping them with the skills to critically analyse information would democracy stand a chance in the future.

Closer to home, there are already segments of Malaysian society that have fallen for primitive forms of fake news designed to appeal to racial and religious identities for political ends.

This raises the question: what hope is there for our democracy once advanced bots capable of natural language processing, deepfakes, and audio-alteration software arrive?

Would Malaysians learn to question the centuries-held belief that “seeing is believing”, and to rise above the baser instincts of racism and prejudice?

While this might sound alarmist, the fact of the matter is that if we do not act swiftly to build the resilience needed to face these challenges, the truth, while still out there, will be crowded out by disinformation. By then, the price we will have to pay will, as they say, be too damn high.

This article first appeared in the New Straits Times on February 3, 2019.
