
Deepfakes are AI-generated media that use digital manipulation to convincingly depict a person saying or doing something they never actually said or did.
Foong Cheng Leong, deputy chair of the Bar Council’s intellectual property committee, said that under the Personal Data Protection Act 2010 (PDPA), data subjects are dependent on the commissioner to initiate action on their behalf.
“This may take time. The PDPA should allow data subjects to take private action, including to obtain orders (against platform providers or advertisers) for an injunction or damages as well as to take down the content involved,” he told FMT.

He said Section 233 of the Communications and Multimedia Act 1998 and certain provisions in the Penal Code are wide enough to enable criminal proceedings to be brought over the misuse of deepfakes.
However, victims have difficulty seeking civil redress as there is no right to private action under the PDPA.
“But damages may be sought if other causes of action like invasion of privacy (involving the use of very private information) or copyright infringement (such as movies or photographs) are applicable,” said Foong.
Last week, deputy communications minister Teo Nie Ching told the Dewan Rakyat that the ministry had received 121 complaints this year alone about high-profile impersonation cases, including those involving the Yang di-Pertuan Agong, the Pahang sultan, the prime minister, and MPs.

She said the Malaysian Communications and Multimedia Commission (MCMC) responded by submitting 121 requests for Meta to remove the content.
Teo said that the ministry was also studying the approach adopted in Taiwan, where platforms are required to verify both the advertiser and the party responsible for publishing the advertisements.
She was responding to a question from Stampin MP Chong Chieng Jen, who asked whether platforms such as Facebook, Instagram, TikTok, and Xiaohongshu have ever faced criminal charges or civil action in Malaysia over scam-related advertisements published on their sites.
Asked if the government would consider initiating a test case by taking platform providers to court, Teo said it was an approach that could be explored with the Attorney-General’s Chambers.
Last month, it was reported that Denmark was working on legislation to grant individuals legal rights over their body, facial features and voice in a bid to clamp down on the creation and dissemination of deepfakes.
The proposed bill would also give Danish people the right to demand that online platforms remove content shared without consent.
Asked if Malaysia should follow suit, Foong explained that copyright law generally protects the expression of an idea, such as artistic or literary works, but not a person's features, which are treated as personal data.
“This (proposal) flips the whole fundamental understanding of what is copyright. The entire copyright law will need to be changed to cater to a person’s features being protected by copyright. It would cause more chaos in the law rather than giving an advantage,” he said.
A forward-thinking approach

Cybersecurity practitioner Murugason R Thangaratnam, however, described the Danish government’s proposed law as a forward-thinking approach to eliminate harm at its source.
He said that while Malaysia had existing frameworks such as the Copyright Act 1987 and the PDPA, these laws were not designed with AI-generated threats in mind and therefore offered limited protection to individuals whose image, voice and identity were manipulated without consent.
“While technological detection tools have a role, they are inherently reactive; safeguarding the fundamental right to control one’s image and identity demands proactive legal measures,” he added.
Murugason was also in favour of strengthening the PDPA, saying biometric and synthetic data such as facial scans and voice prints used in AI-generated content should fall under the Act’s purview.
He suggested introducing deepfake-specific provisions into cybercrime laws to criminalise the malicious use of synthetic media.
Murugason urged the government to establish redress mechanisms for victims of deepfake misuse, including expedited takedown processes and legal remedies.
He also called for national public awareness campaigns to be launched to educate citizens, especially those in vulnerable groups.