Over the last few decades, technology has evolved at a blinding pace, proving time and again the durability of Moore’s law: the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years.
It’s almost mind-blowing to see how artificial intelligence (AI) capabilities have grown, powering many walks of life, from shopping to holidays. Amid it all, how has the UK legal system reacted to AI technology? Is it keeping pace with the latest technologies, or is there some catching up to do?
Regulating AI
In 2020 alone, 432,000 UK companies adopted AI, spending £46 billion collectively on labor for the development, operation and maintenance of AI technologies. (1)
In 2021, the UK government published a National AI Strategy with a view to becoming an AI superpower and fully harnessing the power of AI to address pressing challenges such as climate change and public health.
The UK’s AI strategy marks a step change in the nation’s stance on one of the fastest-growing technologies in the world. The Government plans to launch a new AI programme to support further research and development, and to provide greater regulation and support for organizations hoping to tap into the power of AI. At the time of writing, the UK ranks third in the world for private venture capital investment into AI companies and is home to a third of Europe’s AI innovators. (2)
The EU is also working on AI laws by which member states can promote security, fairness and accountability in the digital business space. Even China, notoriously averse to anything that might hinder business growth, has introduced a sweeping regulatory crackdown on AI and algorithmic use over the last 12 months.
Why is the Regulation of AI so Important?
While it is refreshing to see the UK and other countries taking steps to provide frameworks within which innovation can flourish while also offering protections, many areas require deeper work:
1. Algorithmic Transparency
2. Security Vulnerabilities
3. Bias, Discrimination and Accountability
4. Other Issues to Consider
How is AI Being Used?
AI is already being used in legal practice. Software can help lawyers find relevant precedent based on the issues they are exploring and devise case strategies based on analytics. Machines are also becoming increasingly sophisticated at analyzing and helping to draft contracts.
Furthermore, the power of AI is being leveraged in many publicly funded sectors.
Cambridge-based Addenbrooke’s Hospital is using InnerEye, developed by Microsoft, to process scans automatically for patients diagnosed with prostate cancer. The system takes an anonymised scan image, outlines the prostate on that image, marks up any areas where it finds tumors and reports back. This has sped up prostate cancer treatment, and the hospital is now considering using InnerEye for diagnosing and treating brain tumors, too.
In Singapore, lawmakers are experimenting with AI combined with IoT sensors to analyze air quality, temperature and pollutants. This data may help predict where air quality issues are most severe and how best to mitigate their effects. Elsewhere, IBM researchers are testing a new kind of AI to reduce the severity of air pollution in Beijing. It is worth noting that Singapore currently has no plans to introduce AI-specific legislation, and the nation’s AI journey will prove an interesting comparison with the aforementioned countries in years to come.
All of these developments raise important questions. For example, what should lawyers do to ensure that privileged client information is protected and that they act in the best interests of their clients?
Closing Thoughts: The Lack of Regulation Around AI and What's Needed
The UK’s National AI Strategy recognises the importance of AI policy, and work is also under way to provide a more data-friendly ecosystem. The current Strategy identifies several priorities for AI:
- Reforms to privacy requirements and IP law
- Promotion of a UK AI “assurance” ecosystem
- Data security
- Investigation into AI’s viability standards
Our Faculty Says...

“Firms may seek to avoid implementing AI solutions due to the upfront cost, or perhaps because the improved efficiency might cut into their billable hours, but that is arguably a short-sighted stance to adopt. The fact that AI will be able to do in 10 seconds what may have taken a team of lawyers 20 hours is a good thing, as it frees up lawyers to do what they are best at: find innovative solutions and provide a service tailored to the needs of their clients.
All of this is perhaps a long-winded way of saying that the client will be the ultimate beneficiary, and that is the most important consideration: once the market realises this is an option, clients will begin to seek out legal advisors who implement these solutions. Therefore I do believe that the market will dictate the future of LegalTech, and it is likely that those who embrace LegalTech and AI will replace those who do not.”
– Mark Beer OBE, Speaker, Strategic Advisor and Independent Non-Executive Director
Want to learn more about AI?
Our globally recognized Legal Technology & Innovation Certificate provides hands-on learning in AI and Legal Tech innovation.
Cohort 4 is now open!