
The Online Safety Commission, after receiving victim reports, can demand that platforms take down material and mobile applications for users in Singapore, the Ministry of Digital Development and Information and the Ministry of Law said in a joint statement.
Governments around the world are responding to the way AI is turbocharging the spread of harmful content on social media, allowing victims to be targeted with everything from increasingly realistic deepfakes to cyber-scams.
The commission forms the centrepiece of a bill introduced in Singapore’s Parliament on Wednesday to expand the online harms covered by existing laws.
The ministries said there’s an “urgent need for stronger protections for victims.” The envisioned agency will initially target serious offences including harassment and image-based child abuse, before expanding its scope to cases such as online impersonation and the non-consensual disclosure of private information.
Social media platforms have faced regulatory challenges in Singapore due to the country’s stringent online harms laws. The Ministry of Home Affairs in September ordered Meta Platforms Inc to put in place measures to root out Facebook-based scam advertisements, introduce enhanced facial recognition measures and prioritise review of end-user reports from Singapore.
Alphabet Inc’s Google has pledged to implement age checks by next year as Singapore mandates that app stores block under-18 users from downloading software not intended for their use.
“Bad actors have misused the internet to harass or bully individuals and distributed harmful content like intimate image abuse, with deleterious consequences for victims and society,” the ministries said.
“The Bill introduces new measures to strengthen online safety and protect Singaporeans from online harms, by empowering victims to seek timely relief and obtain redress.”