Farlina Said was quoted in The Edge Malaysia
By Vanessa Gomes, 11 September 2023
From copyright infringement in the creative industries to being blamed for the potential end of the human race, artificial intelligence (AI), specifically generative AI, is under intense public scrutiny. While AI itself is not new and can be traced back to the 1950s, the advancements seen with OpenAI’s GPT-4 have pushed the technology into the mainstream, not only democratising access to it but also bringing its possible harmful effects to light.
AI, in general, is regarded as the next technological revolution, and it is estimated to reach a market size of US$2 trillion by 2030, according to Next Move Strategy Consulting.
As for generative AI as a subset, the market is expected to reach US$126 billion by 2030, at a compound annual growth rate of 32% from 2022 to 2030, according to a study by Valuates Reports. The numbers indicate that generative AI will develop rapidly in the next decade, with various applications created for different industries.
However, AI is a dangerously powerful tool that can drastically change the world, says David Lim, CEO of enterprise AI solutions firm Wise AI. On the one hand, productivity and innovation will be at an all-time high. On the other, he cites projections that up to 47% of jobs could be taken over by automation by 2037. The bigger problem, he says, is the socioeconomic impact of job losses, rather than a fear of Terminator-like robots taking over the world.
“Focus should be given to what the country can do to generate enough wealth to support citizens who do not have jobs, such as [imposing a tax on] robots for every employee they replace and creating a universal basic income. After all, hunger is scarier than fear,” says Lim.
“If policymakers consider regulating this area’s technological development, I recommend creating an ethical and responsible framework around job displacement and societal changes.”
While some suggest that AI can assist humans in their jobs, and not replace them altogether, Jun-E Tan, senior research associate at Khazanah Research Institute (KRI), says it might still translate into job losses.
“The displacement of jobs due to generative AI is going to affect white-collar workers in creative industries, although previously the assumption was [that mainly] blue-collar workers would be displaced. Technology will keep advancing and we’ll see different types of AI and robotics merge, and job displacement might happen in different ways,” she adds.
Job displacement is a big concern, says Farlina Said, senior analyst at the Institute of Strategic and International Studies (Isis) Malaysia. She believes, however, that it will take two to five years for real industrial displacement to occur as it will take time to adopt, invest in and monitor AI.
“In the next five to 15 years, when people actually know what AI really is and adoption is the norm, that’s when we’ll see societal transformation. The question now is whether people can be retrained to take on other jobs in their industries, because it could also be that the job they are suited for has not even been created yet,” Farlina explains.
Gibu Mathew, vice-president and general manager for Asia-Pacific at software development company Zoho Corp, says the huge wave of generative AI models that have come into consumer use in a relatively short time has changed work as we know it, with content creation seeing a huge disruption.
ChatGPT, for example, was trained on hundreds of billions of tokens drawn from a vast corpus of human-generated content, making it a prolific content creator on a variety of topics. This, along with the neural networks it uses to respond to queries, makes it a compelling development for technology consumers and producers alike.
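For readers unfamiliar with the jargon, a token is the chunk of text a language model actually consumes, typically a word fragment. A minimal sketch using OpenAI’s open-source tiktoken tokeniser (chosen here purely for illustration; the article does not reference it) shows how ordinary text maps to tokens:

```python
# Sketch: how text becomes the "tokens" a GPT-style model consumes.
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-3.5/GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

text = "Generative AI is under intense public scrutiny."
token_ids = enc.encode(text)

print(token_ids)                               # integer IDs the model sees
print(f"{len(token_ids)} tokens for {len(text)} characters")
print([enc.decode([t]) for t in token_ids])    # the text piece behind each ID
```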
“As with all disruptive forces, it will take time for our society and economy to grapple with the effects and implement frameworks to manage concerns about security, ethics and legality that understandably arise,” he adds.
“GPT-1 has been around since 2018. The technology became popular because of ChatGPT, built on GPT-3.5, which brought a considerable improvement in how the technology was delivered to users and how users could engage with it. Hence, even as users learn to assimilate the value it can bring, complaints and concerns will evolve.
“Generative AI cannot differentiate between right and wrong. It can only act based on what it is taught. That said, while the technology has generated interesting content, it should be noted that it learnt from others’ work to get there.
“It is important to establish attribution of content quickly, both to protect the rights of the original owners as well as to ensure that untrue content is not perpetuated. In traditional content creation, citation of sources and acknowledgement of copyright are standard practices but with generative models, this may not be the case.”
Malaysia is acting to regulate AI. Minister of Science, Technology and Innovation Chang Lih Kang said in July that the government is preparing a bill to regulate the use of AI in a bid to address ethical and transparency concerns surrounding the technology, and that the law would, among other things, compel AI-generated materials to be publicly labelled.
Companies considering the use of generative AI models in their business are thinking about how to ensure the technology is used ethically. Nick Eayrs, vice-president of field engineering for Asia-Pacific and Japan at enterprise software company Databricks, believes the best way to do this is to be transparent about the model.
“This concept of open sourcing, where the model is open to be viewed and used by other companies and partners in the ecosystem, is one way to understand how it is built. I think this is a fundamental point and the standard will be open-source models, not closed-source ones,” he says.
The company launched Dolly 2.0, an open-source, instruction-following large language model, fine-tuned on a human-generated instruction dataset licensed for research and commercial use. Eayrs says making clear how it is built is one way of building trust.
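For context, Dolly 2.0’s weights are published on Hugging Face, so anyone can inspect or run the model. Below is a minimal sketch of loading a smaller variant with the transformers library; the arguments follow the pattern published on the model card, but treat this as illustrative rather than definitive:

```python
# Sketch: running Databricks' open-source Dolly 2.0 locally.
# Requires: pip install torch transformers accelerate
import torch
from transformers import pipeline

generate = pipeline(
    model="databricks/dolly-v2-3b",   # smaller sibling of the 12b flagship
    torch_dtype=torch.bfloat16,       # halves memory use on supported hardware
    trust_remote_code=True,           # Dolly ships a custom instruction pipeline
    device_map="auto",                # spread layers across available devices
)

print(generate("Explain in two sentences why open-source LLMs aid transparency."))
```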
The other method seen in the industry is watermarking. AI companies, including OpenAI, Alphabet and Meta Platforms, have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to make the technology safer.
All forms of AI-generated content, from text and images to audio and video, will carry a watermark so that users know when the technology has been used. This makes it easier to spot deepfake images or audio that may, for example, depict violence that never occurred, power a more convincing scam or distort a photo of a politician to cast the person in an unflattering light.
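One published family of techniques for text works roughly like this: at each step, the generator is nudged towards a keyed, pseudo-random “green list” of tokens, and a detector later checks whether an improbably high share of tokens came from their green lists. The toy sketch below, using only Python’s standard library, is a simplified illustration of that idea, not any vendor’s actual scheme:

```python
# Toy sketch of green-list text watermarking (simplified illustration only).
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a keyed 'green' half of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def green_fraction(tokens: list) -> float:
    """Share of tokens that fall in their green list; ~0.5 for unmarked text."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)

# A watermarking generator prefers green tokens at every step, so marked
# output scores well above 0.5 here while ordinary text hovers near 0.5.
```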
Simon Dale, managing director of Southeast Asia and Korea at software firm Adobe, says the fundamental belief is that AI, when done right, in a manner that serves the creator and respects the consumer of the experience, will amplify human intelligence and open up a world of new possibilities for creativity and productivity.
However, important decisions still require the human touch. “Self-driving cars are a great example of humans still needing to take the wheel. Each vehicle still has a driver just in case the machine has reached its limits or gets confused. Similarly, if you’ve ever called a call centre, you’ve likely had your question transferred to a human operator because the machine couldn’t understand you,” he explains.
“AI technology should be an asset that helps us make smarter, more informed decisions. In the end, it can’t have the final say — the tech simply isn’t trained to handle new situations. For important decisions, the human touch is still needed.”
Dale says Adobe is developing and deploying generative AI innovations around its AI ethics principles of accountability, responsibility and transparency, with safeguards in place for data collection, bias mitigation and human oversight, among others.
“For Adobe Firefly data collection, we train our model by collecting diverse image datasets, which have been carefully curated and pre-processed to mitigate against harmful or biased content,” he adds.
“We also recognise and respect artists’ ownership and intellectual property rights. This helps us build datasets that are diverse, ethical and respectful towards our customers and our community.”
Adobe also addresses bias and tests for safety and harm: in addition to training on inclusive datasets, it continually tests its models to avoid perpetuating harmful stereotypes, using a range of techniques that includes ongoing automated testing and human evaluation, says Dale.
AI’s socioeconomic impact needs further study
Given that there are different types of AI applications across various sectors and the definition of AI is nebulous, there are several risks to assess, says KRI’s Tan. Taking that into account, it would be very difficult to come up with a regulatory framework, not just for Malaysia but on a global level too.
For example, there are AI applications that are already a part of our daily lives, such as the Waze navigation app, that go beyond borders. As such, it is difficult to regulate some Big Tech companies that are not based here. Big Tech may find it more of a priority to adhere to regulations in bigger jurisdictions — such as the European Union — than in small countries like Malaysia, if we choose to regulate.
“If we’re trying to regulate AI, there are certain things that fall outside of our regulation and jurisdiction. Therefore, we would need to look at a global governance mechanism, similar to what we’re seeing in the climate change space,” says Tan.
“The issue in climate politics is also that bigger countries have more say in setting the rules [and skewing them to their own interests], so developing countries like Malaysia band together as a negotiating bloc to fight for common but differentiated responsibilities, and for climate financing by the richer countries. We’re probably going to see a similar situation if a global AI governance mechanism is mooted.”
As the impact of AI is wide and transcends sectors, it should not be an “adopt now, fix later” situation because it would be tough for companies to roll back on their AI adoption and it could cause harm to the company and its employees too.
As such, much care has to be taken as the technology is rolled out, says Isis’ Farlina. She admits, however, that the most difficult part of the equation is for governments to dive deep into researching AI’s impact on society.
In June, the EU said it was setting rules — the first in the world — on the use of AI by companies. The EU AI Act seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects”.
For policy implementation such as this to take place, Farlina says the government needs to know what the technology does before actually getting to the policy resolution. “The point is, we have to introduce these policies, and looking at how AI works on top of other technologies is going to be a difficult task.”
Who will spearhead this in Malaysia, and is Malaysia’s AI industry sophisticated enough to run a sandbox to determine the appetite for AI and its impact on the country? Farlina says one way is to learn from best practices applied abroad.
“The government has to introduce a process where it has the right to ask industries or companies to carry out assessments and intervene if needed. It needs to be able to ask companies to investigate if there’s something wrong, whether it’s biases in the hiring process or with automation. It’s going to be quite hard if the government doesn’t have mechanisms,” she notes.
Different models of AI require different guardrails to address the biases within the system, she adds, but the assumption is that models with self-awareness would be able to assess themselves, which may lighten the burden on the human in charge. Nevertheless, she says the fundamental rule is that humans should never be removed from the system’s processes.
“There always needs to be checks and balances. Even if the AI is self-aware, it still needs a human to assess whether the system is actually working.”
To illustrate her point, Farlina points to the use of chatbots by a plumbing company and a mental health app. Both would employ a chatbot but require different guardrails because of the impact they would have should the AI not perform well. For the plumbing company, the worst outcome would be a user’s appointment not being secured. In the case of a mental health app, the worst outcome could be detrimental to a person’s life.
“You can have the same technology applied to different solutions, but the impact will be different because the user is different. Some consequences are greater than others,” she says.
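Her example translates naturally into how such guardrails might look in practice. The sketch below is hypothetical (the domains, risk terms and escalation rule are invented for illustration): the same underlying model response is filtered differently depending on the stakes of the deployment:

```python
# Hypothetical sketch: one chatbot engine, two sets of guardrails.
RISK_TERMS = {"hurt myself", "end my life", "self-harm"}

def respond(user_message: str, domain: str, model_reply: str) -> str:
    if domain == "mental_health":
        # High-stakes deployment: escalate to a human on any risk signal.
        if any(term in user_message.lower() for term in RISK_TERMS):
            return "Connecting you to a trained human counsellor now."
        return model_reply  # and log the exchange for mandatory human review
    # Low-stakes deployment (e.g. plumbing bookings): the worst outcome
    # is a missed appointment, so the bot can answer freely.
    return model_reply
```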
KRI’s Tan says one issue brought to the fore in previous industrial revolutions was social protection, because working conditions then were abysmal. Employees fought for social protection as well as liveable conditions and wages.
Social protection will be at the centre of the AI revolution, where people will need to be protected from harm and their quality of life assured.
“One thing that people are talking about is universal basic income. We need to think about it and other social safety nets, such as universal access to healthcare and other public services, a little bit more. It’s not a case of not wanting to get a job or not having the right attitude to go out and look for a job. It’s just that there won’t be enough jobs,” she says.
“While AI technologies can impact the individual negatively in some cases, such as privacy violation and behavioural manipulation, certain risks do not sit at the individual level, such as threats to democracy, trust in institutions and the rule of law, and widening social inequality. They require a deeper look at the structure of how society itself is run and how technology can disrupt that, and we have a huge amount of work ahead of us.”
Data, the gold mine for generative AI models
In the first half of this year, companies across all industries embraced generative AI, jumping on the bandwagon and taking the models’ capabilities further with their own data.
Chinese online broker Tiger Brokers announced the launch of its AI investment assistant, TigerGPT, a text-generating AI chatbot on its flagship platform Tiger Trade, leveraging the company’s vast financial content pool and OpenAI technology. TigerGPT essentially acts as a powerful search engine for new and seasoned investors.
Social messaging app Snapchat, meanwhile, built conversational AI into its product. The bot, called My AI, appears as a regular contact in the user’s list of friends and can be called up any time to help with answering questions or simply for entertainment.
With increased public interest and access to generative AI technologies, businesses are wondering how they can leverage these for competitive advantage and to differentiate themselves through their products and services.
Databricks’ Eayrs draws a parallel between the generative AI boom and the California gold rush of the 1840s because in both cases, everyone believed they were going to make a lot of money and rushed into it.
In the case of his company, Eayrs explains that Databricks provides the “mining tools” and data is the “gold”. “We provide the tools that allow companies to build, deploy and run their own large language models. We want our customers to use this technology and apply it to their data. It allows you as a company to build your own generative AI and large language models.”
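In practice, “applying an LLM to your own data” often means retrieving the most relevant internal document and placing it in the model’s prompt. The stdlib-only sketch below uses naive keyword overlap to make the pattern concrete; real systems use vector embeddings, and the documents and scoring here are invented for illustration:

```python
# Hypothetical sketch: retrieve a company document, then prompt an LLM with it.
DOCS = {
    "refund_policy": "Refunds are processed within 14 days of a claim.",
    "warranty": "All hardware carries a two-year limited warranty.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question (naive)."""
    words = set(question.lower().split())
    return max(DOCS.values(), key=lambda d: len(words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt; the string would be sent to any LLM."""
    return f"Answer using only this context:\n{retrieve(question)}\n\nQ: {question}"

print(build_prompt("How long do refunds take?"))
```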
He believes that generative AI isn’t a tech bubble but the way forward, specifically when it comes to solving business problems. He foresees it becoming a part of forward-thinking companies that are data driven and rely on big data analytics.
“Organisations that are data driven and data forward use analytics to compete in the market, to be efficient and effective. They will find ways to use generative AI and large language models to differentiate themselves from a product and service perspective,” he says.
“I think it will become the norm. It will become the standard for data-forward analytics for organisations.”
Increasing efficiencies at lower cost
At advertising agencies, it might once have taken weeks to prepare a branding proposal with design mock-ups, social media posts, write-ups, digital strategies, market research and competitor analysis before it could be presented to the client. Wise AI’s Lim points out that with generative AI technologies and customisation efforts, this process can take under 10 minutes, allowing human resources to be channelled to high-value work such as sales and creative direction, with humans providing the final touch rather than handling administrative tasks.
“With much lower operating costs, we will see more micro and small agencies created to serve SMEs (small and medium enterprises) that previously could not afford agency services. Apart from advertising, industries that are expected to benefit from generative AI include healthcare, education, finance, entertainment, customer service and manufacturing. The losers will be those who do not adopt the technology.”
A new world of possibilities for businesses and users has opened up and generative AI is changing the way we all think about creativity, says Adobe’s Dale.
Creatives can be freed from repetitive tasks and instead use that time to work on an innovative concept or to develop a marketing strategy for a new product launch. Adobe’s new family of creative generative AI models, Firefly, does just that. It allows those who create content to use their own words to generate content the way they dream it up — from images, audio, vectors, videos and 3D to creative elements such as brushes, colour gradients and video transformations — with greater speed and ease.
Implementing generative AI is especially crucial today because the modern consumer is increasingly sophisticated and expects a certain level of interactivity with the brands they deal with, whether they are sharing a compliment, a concern or a complaint. Zoho Corp’s Mathew says it is no longer efficient or humanly possible for businesses to staff human touch points at every hour of the day.
Lim concurs, adding that from an enterprise’s perspective, it is about how this tool can improve the profit and loss statement.
“It can create AI avatars/digital humans for enterprises to provide personalised customer interaction and replicate them to multiple departments like virtual customer service, virtual salespersons and virtual marketers to serve different groups. This will result in 24/7 productivity, cost savings, improved customer service and better consumer analysis to improve sales and create new revenue streams,” he says.
Business owners will be able to improve decision-making too. “Generative AI can help business managers by analysing large amounts of data and providing insights. Technology can do this in a fraction of the time, where previously it would not have been done at all, or would have been done only by large corporations that can hire people to do it,” says Mathew.
“[There will be] drawbacks and as with all systems, there should be regular checks to ensure that the intended functionality of the AI model is working. Businesses also need to ensure that privacy and security protocols are adequate to protect customer and business data.”
Nevertheless, there are challenges, says Lim, and big enterprises will have a competitive advantage over SMEs as the latter may lack the funding to implement the technology.
“Another common problem in the market is that most enterprises cannot architect this AI transformation in-house because AI expertise is scarce. Again, enterprises with limited access to AI expertise will be left out,” he points out.
This article first appeared in Digital Edge, The Edge Malaysia Weekly on September 11, 2023 – September 17, 2023