
From Liew Li Xuan
The Financial Times recently profiled a former junior lawyer who built a legal artificial intelligence (AI) start-up now valued at US$5 billion.
Beyond the headline of entrepreneurial success, this story captures something much larger: the way technology is beginning to transform one of the oldest professions in the world.
Law, often seen as resistant to change, is now being reshaped by machine learning, automation, and data-driven systems.
For Malaysia, this moment should be both an inspiration and a warning.
The rise of billion-dollar legal technology companies is not simply about efficiency. It signals a structural shift in how legal services are delivered, consumed and regulated.
In the past, tasks such as reviewing contracts, performing due diligence, or researching precedents required teams of junior lawyers working long hours. Today, many of these functions can be performed faster and often more accurately by algorithms.
AI tools are not replacing lawyers altogether, but they are changing the role of lawyers — pushing them to focus on higher-level judgement, advocacy and strategy, while machines handle the repetitive or technical groundwork.
This transformation is drawing the attention of investors worldwide. Venture capital is flowing into legal tech because the value proposition is clear: reduced costs, increased speed and global scalability.
A firm with a well-trained AI system can deliver services across jurisdictions far more quickly than a traditional law practice. Yet, this rush towards automation raises profound questions of regulation, accountability and equity.
What happens when an algorithm makes a mistake that harms a client? How do we preserve legal privilege in an environment where sensitive documents are being processed by third-party platforms? How do we ensure that AI tools do not encode or amplify bias? These are not abstract concerns.
In many jurisdictions, regulators are already grappling with them. In the European Union, the AI Act is creating a new compliance regime to govern high-risk applications of AI, including those in law and justice.
In the US, state bars are issuing guidance on how lawyers can ethically use AI tools.
Even Singapore has begun integrating AI into dispute resolution, while creating frameworks to protect fairness and transparency.
Malaysia cannot afford to lag behind. Our legal system has long prided itself on adapting common law principles to local realities, but the pace of technological change now demands more deliberate action.
On the positive side, Malaysia already has regulatory infrastructure such as the Personal Data Protection Act (PDPA) and the Communications and Multimedia Act (CMA), which can serve as building blocks.
However, neither statute was designed with AI in mind, and both require updating to deal with issues of algorithmic transparency, automated decision-making and cross-border data use.
The risk of inaction is twofold. First, without clear guidelines, AI adoption in the legal sector may proceed in a fragmented or inconsistent way, exposing clients to uncertainty and lawyers to liability.
Second, the benefits of legal technology may accrue only to large firms with the resources to acquire expensive systems, leaving small firms, solo practitioners and underserved communities behind.
Access to justice, already a pressing issue, could become even more unequal if AI tools widen rather than bridge the gap.
What Malaysia needs is a proactive approach. This could include establishing clear rules on the permissible uses of AI in legal practice, with requirements for human oversight where necessary. It also means strengthening data protection and privacy safeguards, given that AI systems are trained on massive amounts of legal documents, contracts and, sometimes, sensitive personal information.
Beyond regulation, there must be a commitment to ensuring accessibility.
Government grants, university incubators, and legal aid initiatives could make AI tools available beyond elite circles, so that innovation serves the many, not the few.
Legal education must also evolve. Future lawyers will not only need to know statutes and case law; they will need to understand how algorithms work, where they fail, and what ethical risks they pose.
Embedding training on technology and ethics in law schools will ensure that the next generation of lawyers can use AI responsibly, not blindly.
At the same time, collaboration between regulators, law firms, universities and start-ups is essential. Regulatory sandboxes, controlled environments where AI legal tools can be tested under supervision, would allow Malaysia to encourage innovation while safeguarding rights.
The story of a junior lawyer turned founder of a US$5 billion AI legal company is not just an inspiring anecdote; it is a signal of where the legal profession is heading. The law will always require human judgement, but the tools of practice are becoming computational.
Code, data and models are now part of the lawyer’s toolbox. The challenge for Malaysia is whether we embrace this transformation with foresight or allow ourselves to be overtaken by it.
If we get this right, AI legal innovation could streamline our courts, reduce costs and expand access to justice. If we fail, we risk embedding inequality, undermining rights, and losing trust in the very system that upholds the rule of law.
The choice is ours.
Liew Li Xuan is a youth advocate and founder of LifeUp Malaysia, an organisation dedicated to digital well-being, preventing cyberbullying and promoting scam awareness.
The views expressed are those of the writer and do not necessarily reflect those of FMT.