
Cybersecurity expert M Selvakumar of Universiti Sains Malaysia warned that by the time educational campaigns address emerging harms such as cyberbullying, sophisticated grooming tactics, or viral self‑harm trends, platform algorithms may already have amplified these dangers at scale.
“Relying on user awareness merely manages the fallout rather than preventing the harm from occurring in the first place,” he told FMT.
From user responsibility to platform accountability
“This failure of reactive education has sparked a rapid global policy shift, moving the burden of responsibility from the user directly to tech companies.
“Instead of teaching children how to survive a hostile digital environment, governments are finally regulating the environment itself through mandatory ‘safety by design’,” he said.
Amnani A. Kadir, executive director of Protect and Save the Children, agreed, noting that governments were acting in quick succession because the risks were now widely recognised and shared across countries.
“We are seeing rising anxiety, depression, bullying and online exploitation across borders, alongside growing awareness that both platforms and parents need stronger support and accountability,” she told FMT.
“It has become a global public health problem, with much research conducted on the negative impact of (these) platforms on developing young brains.”
Australia, the turning point
The shift gained momentum after Australia became the first country to introduce a nationwide under-16 social media rule in December 2025. Its “delay, not ban” model places accounts in quarantine until users reach the age threshold, rather than deleting them altogether.
Selvakumar said Australia’s move had provided a workable template that other countries could quickly adopt.
“Once one major democracy took the plunge and absorbed the initial pushback from tech lobbyists, it provided political cover for other nations, like Malaysia, to immediately follow suit without looking like global outliers.”
Countries such as France and Spain have explored similar age thresholds, while parts of Southeast Asia are beginning to move in the same direction with a delayed access approach.
Amnani also stressed that policies must be grounded in the principle of acting in the “best interest of the child”, adding that government intervention had become necessary as platforms have failed to adequately address long-standing harms.
“It means prioritising their safety, development and dignity above convenience or profit, as children are still developing and cannot be expected to manage complex digital environments designed to influence behaviour.
“Platforms have not done enough to prevent harm. In fact, they’ve been hiding data on the harm to children for years. Giving children unrestricted access without supervision is like letting them drive without training,” she said.
Malaysia joins wider child-protection push
Selvakumar said governments had increasingly concluded that voluntary action by platforms was insufficient.
“Governments have lost patience, concluding that companies whose business models rely on harvesting attention and serving ads will never voluntarily restrict their most impressionable user base without the threat of massive financial penalties,” he said.
Amnani said Malaysia’s move should be understood as part of a wider global child-protection effort rather than a standalone restriction.
“Malaysia is acting due to growing public health concerns. Around one in three youths show signs of addictive use, alongside rising mental health issues, cyberbullying and online exploitation,” she said.