This article first appeared in Digital Edge, The Edge Malaysia Weekly on October 4, 2021 - October 10, 2021
From the shows we watch, to what we have for dinner and even whom we date, algorithms have replaced our active power of choice.
The appeal is immense, as the positives of algorithmic decisions greatly outweigh the negatives, even though they take away some autonomy. These algorithms comb through tonnes of personal and granular data, making correlations and predictions that help streamline and navigate our lives in a digitally saturated world.
Often overlooked, however, is the fact that a lot of inherent human biases make their way into algorithms — the building blocks of artificial intelligence (AI) and machine learning systems that we rely upon to automate simple and complex decision-making processes.
The use of AI in decision-making is still in its infancy in Malaysia but gaining momentum as the nation aspires to become a regional leader in the digital economy by 2030, AI being a significant component of this aspiration.
Minister in the Prime Minister’s Department (Economy) Datuk Seri Mustapa Mohamed, during the unveiling of the Malaysia Digital Economy Blueprint in February, said AI-related technologies alone could increase gross domestic product (GDP) by up to 26% — making it the biggest commercial opportunity in the next decade.
Subsequently, the Malaysia Artificial Intelligence Roadmap (AIRmap) — designed by Universiti Teknologi Malaysia experts and supported by industry consultants from the National Tech Association of Malaysia and the Ministry of Science, Technology and Innovation’s (MOSTI) National Science and Research Council — was launched in March to “create a thriving national AI ecosystem”.
Of its primary goals, the AIRmap has set out to establish AI governance. “Artificial intelligence is going to permeate all aspects of life and will inexorably evolve along with one’s cradle-to-grave lifespan. No human activity or product will be left untouched,” states the policy document.
The extracts of the plan show that the team is working on establishing an AI coordination and implementation unit by 2022, which will oversee policy direction, draw up an AI code of ethics, evaluate existing laws, and address cybersecurity, talent development, research and innovation, among other areas.
But seeing that current regulations are likely to come under strain — considering the exponential influence and pace of growth of digital technologies — data experts and analysts urge policymakers to move quickly to ensure that existing laws, regulations and legal constructs remain relevant in the face of technological change.
Although we would like to believe that algorithms are unimpeachable in their decision-making capabilities, skewed input data, false logic or even just the prejudices of programmers mean AI easily amplifies human biases, says Dr Rachel Gong, senior research associate at Khazanah Research Institute (KRI).
“None of [it] is neutral. All of it is shaped by the people who design the algorithm, who write the code, who decide what data should be used to teach the machine,” says Gong.
In June, Gong and a team of KRI researchers published a book titled #NetworkedNation: Navigating Challenges, Realising Opportunities of Digital Transformation, in which they highlight the importance of digital governance, among others.
“As Safiya Noble points out in her book, Algorithms of Oppression, algorithms themselves are biased even before big data comes into the picture. It’s a point that a lot of people find hard to accept; it’s almost easier to just focus on the data because that sort of shifts the responsibility away from the big companies developing the algorithms and onto history and society more broadly.
“It’s something that underscores all the policy recommendations we make in the #NetworkedNation book, that tech alone is not, and cannot be, the answer. There’s a whole swath of social considerations that need to be taken into account when we make plans to digitalise government services or go cashless or however else we adopt technology,” says Gong.
Escalating instances of AI perpetuating biases that exist in society, particularly discrimination based on body size, race and gender, are just the tip of the iceberg.
In 2016, Microsoft’s Tay — an AI Twitter bot that the company described as an experiment in “conversational understanding” — was corrupted in less than 24 hours as people tweeted all sorts of misogynistic and racist remarks at it, which Tay then began repeating back to users.
More recently, social media behemoths Facebook, Instagram and TikTok have come under heavy scrutiny over the censorship of content from people of colour and plus-size individuals, and even the suppression of posts from Palestinians when violent conflict erupted in Israel and the Palestinian territories in May.
In June, Stanford University and University of Chicago researchers found discrepancies in mortgage approvals between minority and majority groups: the AI-powered predictive tools used to approve or reject loans in the US are less accurate for minorities. If financial institutions were to automate the selection process entirely, it could disadvantage the unbanked and underserved.
In an attempt to weed out biases in its AI, microblogging site Twitter held a competition in March to find algorithmic bias in its photo cropping system.
The top entry showed that Twitter’s cropping algorithm favours faces that are “slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits”.
The second- and third-placed entries showed that the system was biased against people with white or grey hair, suggesting age discrimination, and that it favours English over Arabic script in images, underscoring how pervasive AI biases have become.
“The interesting thing about technology is how processes in cyberspace are diffused in greater society, with implications for the economy, social unity and politics,” notes Farlina Said, an analyst in foreign policy and security studies at the Institute of Strategic and International Studies (ISIS) Malaysia.
Apart from discrimination, bias in algorithms also affects competition and consumer experience, she adds. “As users congregate on large platforms, this would create monopolies and impact fair competition and user experience.”
People’s cognitive capabilities such as analytical thinking are also challenged in multiple ways, as algorithms dictate the content we access.
“It can also exaggerate divisions and carve out echo chambers, which would challenge traditional efforts at building national unity. Examples such as Cambridge Analytica, or the ability of algorithms to suggest content to users, show that echo chambers can pull society deeper into groups.
“While not all groups have devastating consequences, driving groups into extremes can lead to increased radicalisation, a rise in anti-vaccine sentiment and hardened communal views. In an environment where moderation would bring stability to development pathways, echo chambers will impact Malaysia negatively,” says Farlina.
There are two stages to how this bias can creep into a seemingly automated process, says Izad Che Muda, CEO and co-founder of Inference Tech Sdn Bhd, an AI solution provider.
In the training stage, an algorithm learns based on a set of data or certain rules or restrictions. The second stage is the inference stage, in which an algorithm applies what it has learnt in practice; this is where its biases are revealed.
“Algorithm bias happens when a machine learning software produces outputs or predictions that show biases against certain groups. Algorithm biases usually occur in various stages along the machine learning development pipeline,” says Izad.
“First, a machine learning algorithm is an algorithm that learns from data. It is demonstrably powerful, as it can analyse complex and high-dimensional data such as videos, images and even speech, and produce more accurate results.
“But it is only as powerful as the data it feeds on. If we put garbage in, we will definitely get garbage out,” says Izad. Inference Tech specialises in designing computer vision and AI-driven video analytics software.
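To make those two stages concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the feature names (experience, test score, group) are invented for illustration: the historical label leaks a group attribute, so bias absorbed at the training stage surfaces at the inference stage.

```python
# Minimal sketch of the two stages Izad describes, on synthetic data.
# Feature columns (experience, test_score, group) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training stage: the model learns whatever patterns the data contains.
# Here the historical label leaks the "group" column, encoding past prejudice.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] + 0.8 * X_train[:, 2] > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference stage: the learnt bias surfaces in live predictions.
# Two candidates with identical credentials, differing only in group.
candidates = np.array([[0.5, 0.9, -1.5],
                       [0.5, 0.9,  1.5]])
print(model.predict(candidates))  # likely [0 1]: group alone flips the outcome
```

The point is not the library calls but the pipeline: nothing in the code is written to be prejudiced, yet the decision differs by group because the training data did.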
Take, for example, AI recruiting software. During the data acquisition process, sample bias may happen when the data is not representative of the realities of the environment in which the model is deployed, he points out.
“Say an AI algorithm is modelled using attributes of star employees of certain companies. If the companies do not practise diversity, however, then the sample is not representative of the whole population. This limits the opportunity for a good candidate from a different background to be recognised.
“Next is prejudice bias, which replicates existing societal biases in the machine learning model itself. For example, Amazon’s AI recruiting system was found to be biased against women. It was trained by observing patterns in résumés submitted to the company over 10 years.
“Because Amazon had hired more men than women in the past, the AI replicates this bias. Even during the processing, an algorithm bias may happen when the developer allocates more weight to irrelevant parameters,” says Izad.
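One way the sample bias Izad describes can be caught early is a simple representativeness check on the training set before any model is built. The sketch below is hedged: the group names, census shares and sample counts are all made up for illustration.

```python
# Hedged sketch: compare training-set composition against assumed
# population shares. All names and numbers here are hypothetical.
from collections import Counter

population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
training_rows = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

counts = Counter(training_rows)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts[group] / total
    flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{group}: sample {observed:.0%} vs population {expected:.0%} -> {flag}")
```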
In a multiracial setting such as Malaysia, an AI model trained using data that is not representative of the population will result in a model that exhibits some bias, he adds.
“A facial recognition system that performs well in China may not work for us, as we are of different populations. Even if the model is trained using our data, the bias may still happen. A facial recognition system trained using datasets consisting of mostly Malay men is likely to have a higher error rate for other demographic classes.
“Having these biased AI models making decisions for us will put certain demographic groups at risk of injustice and discrimination. It is important to ensure any AI system deployed in our society reflects us and does not open room for discrimination,” says Izad.
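A common safeguard against exactly this failure mode is disaggregated evaluation: reporting error rates per demographic group rather than a single overall accuracy figure, which can hide a large gap behind a good average. A sketch on simulated data, with the group labels and error rates invented to mirror Izad's example:

```python
# Illustrative sketch of disaggregated evaluation on synthetic results:
# a model that errs 2% on the majority group and 12% on everyone else.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["malay_male"] * 700 + ["other"] * 300)
errors = np.where(groups == "malay_male",
                  rng.random(1000) < 0.02,
                  rng.random(1000) < 0.12)

for g in ["malay_male", "other"]:
    mask = groups == g
    print(f"{g}: error rate {errors[mask].mean():.1%} over {mask.sum()} samples")
```

The overall error rate here would look respectable at roughly 5%, while the minority group faces an error rate several times higher, which is precisely the risk Izad flags.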
As it is still too early for lawmakers to see just how this technology will affect the public, regulations on AI are not expected until at least 2025, although the AIRmap indicates that a code of conduct can be expected as soon as 2022.
The most extensive conversation on AI regulation is happening in the European Union, where governments are already implementing or developing regulations on the use of AI in facial recognition and computer vision, operation and development of autonomous vehicles, challenges arising from conversational systems and chatbots, concerns around AI ethics and bias, aspects of AI-supported decision making and the potential for malicious use of AI, among others.
In Southeast Asia, Singapore is taking the lead; the government has developed a Model AI Governance Framework to help AI practitioners in their systems design and implementation.
KRI’s Gong cautions, however, that policymakers ought to work out the legislation and regulations before the technology is rolled out on a larger scale.
According to the AIRmap, AI is expected to be rolled out in healthcare, education, agriculture, smart cities, transport and the public services sector.
The closest existing legislation governing data protection is the Personal Data Protection Act (PDPA) 2010. Data, after all, fuels AI. While the PDPA restricts how personally identifying data may be distributed, it is rarely enforced. Moreover, the act applies only to commercial transactions, not to data collected by the federal and state governments. The PDPA is targeted for review by 2025, but there has been no word on whether it will be revised.
Gong says: “It seems that a lot of new technology is being proposed very excitedly by people trying to sell software and systems, and legal and regulatory frameworks have not caught up with all these technologies.
“They are certainly not designed to keep pace with how rapidly technologies evolve, and one of the things KRI recommends is a review of existing laws written in and for an analogue world to ensure they can be appropriately applied in a digitalised society.
“The Digital Economy Blueprint does mention that a review is in order, but the targets to review existing laws vary from 2025 to 2030, to say nothing of drafting new ones. In the meantime, I guess existing laws have to be interpreted and applied ad hoc.”
Once a technological tool, whether an app or an algorithm, is implemented on a large scale, it is hard to reverse its effects.
“We need to ask the difficult questions as early in the process as possible, drawing on lessons that we can learn from how other countries have implemented these technologies ahead of us.
“Algorithms are already a black box when it comes to how the machine learns. Wherever possible, we should make sure that the rest of the process is as transparent as possible without sacrificing privacy and security,” asserts Gong.
Farlina concurs, adding that the data economy can increase productivity, spur innovation and improve livelihoods.
She says: “It might be hard to put the genie back in the bottle. It may be better to build an ecosystem that can guide and check the development of such [AI] systems instead of opting out of the technology.
“Among the prevention methods that I can think of is setting up a data governance regime that upholds ethical principles and addresses issues of bias in the data sets. Data management is particularly important, and it should be part of industry standards to use high-quality data sets that are free of bias and do not produce biased outcomes.”
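One concrete shape such a check could take, offered here as a sketch rather than a prescribed method, is the “four-fifths” disparate impact test often used in fairness audits, which flags any group whose selection rate falls below 80% of the best-off group’s. The group names and counts below are hypothetical:

```python
# Hedged sketch of a four-fifths disparate impact check on model outcomes.
# Groups and counts are invented for illustration.
approved = {"group_a": 120, "group_b": 45}   # positive outcomes per group
applied  = {"group_a": 200, "group_b": 150}  # applicants per group

rates = {g: approved[g] / applied[g] for g in approved}
reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    status = "potential disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```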
As there is no law that governs AI, the responsibility for governance should be held collectively, says Izad. “First, both developers and users have to understand and assess how good the AI solution is and how critical the decision is to the problem. In situations where critical judgement is involved, sometimes AI only acts as a guideline or to improve the efficiency and consistency of the work.”
Izad stresses that the onus is on the developer to ensure users understand the accuracy of an AI model and acknowledge that there is still the risk of errors no matter what.
“Again, developers have to continuously improve the accuracy of their models over time and educate users on how machine learning works. They must not over-claim and upsell the capabilities of their software,” he says.