Over the last few decades, technology has evolved at a blinding pace, bearing out the spirit of Moore’s law (the observation that computing power doubles roughly every two years): each decade, technology evolves even more rapidly than in the one before it.

It’s almost mind-blowing to see how artificial intelligence (AI) capabilities have grown, touching many walks of life, from shopping to holidays. In the midst of it all, how has the UK legal system reacted to AI technology? Is it keeping pace with the latest technologies, or is there some catching up to do?

Regulating AI

In 2020 alone, 432,000 UK companies adopted AI, spending £46 billion collectively on labor for the development, operation and maintenance of AI technologies. (1)

In 2021, the UK government published a National AI Strategy with a view to becoming an AI superpower and fully harnessing the power of AI to address pressing challenges such as climate change and public health.

The UK’s AI strategy marks a step change in the nation’s stance on the fastest-growing technology in the world. The Government plans to launch a new AI program to support further research and development, and will also take steps to provide greater regulation and support for organizations hoping to tap into the power of AI. At the time of writing, the UK ranks third in the world for private venture capital investment into AI companies and plays host to a third of Europe’s AI innovators. (2)

The EU is also working on AI laws by which member states can promote security, fairness and accountability in the digital business space. Even China, notoriously averse to anything that might hinder business growth, has introduced a sweeping regulatory crackdown on AI and algorithmic use over the last 12 months.

Not everyone, however, is as far along the road to proper regulation. In South Africa, for example, the automatic processing of data is regulated by the Protection of Personal Information Act (POPIA), but there are no specific laws governing the development of AI.
 

Why is the Regulation of AI so Important?

While it is refreshing to see the UK and other countries taking steps to provide frameworks within which innovation can flourish whilst offering protections at the same time, many areas require deeper work:  

1. Algorithmic Transparency
One of the principal issues with regulating AI is encouraging companies to be transparent when it comes to how their algorithms work, especially what data they use and how they reach their results. Large companies have been hesitant to provide this information – much of it is proprietary – and the idea of sharing their hard-won methods can be anathema to many innovators. The UK has published guidelines for algorithmic transparency, but these have been live for less than 12 months and are still updated regularly, suggesting that there is plenty of fine-tuning still to do. The EU also has accessible guidelines for ethical algorithmic use, but undoubtedly these will need to adapt with the times as businesses explore new avenues for AI programming. 
 
2. Security Vulnerabilities
The problem with innovation is that for each game-changing solution, there is a hacker ready to pull it apart for their own nefarious purposes. Cyber threats have reached astronomical levels since the COVID-19 pandemic, and for many businesses, it is not simply a question of “if” an attack will occur, but “when”. Cyber criminals innovate almost as quickly as technology companies do, and if AI development is to stay ahead, national leaders need to start imposing stronger regulatory frameworks to ensure people’s data is secure. 
 
The Center for Security and Emerging Technology argues that tackling cyber fraudsters is too great a task for tech innovators to handle alone, and that policymakers must understand the threats well enough to assess, and address, the risks involved.
 
 
3. Bias, Discrimination and Accountability
We covered much of this subject in a previous blog – What is Responsible AI? There, we reflected on how innovators should develop AI that isn’t subject to their own biases and preconceptions. As humans, we are fallible, and we risk building those biases into AI technology. Similarly, too many adopters lean heavily on an AI’s results or findings without questioning them. This creates a problem of contestability, and concerns over an AI’s fairness or accountability are only heightened when the developer is also reluctant to reveal its methods. Some businesses are already taking steps towards transparency on this issue, but if policymakers can’t address it adequately as well, it will erode the public’s trust in AI, discouraging businesses from adopting the technology too.
 
4. Other issues to consider
Whilst AI technology is progressing in leaps and bounds, challenges will inevitably arise as it is used in more and more sectors. For example – how do we regulate intellectual property created by an AI? What recourse do users have when they feel they have been judged or treated unfairly by an AI’s algorithms? What liability is there for harm? All of these pertinent questions require honest answers, plus transparent collaboration between business and national leaders, something that thus far has been mostly lacking.

 

How is AI Being Used?

AI is already being used in legal practice. Software can help lawyers find relevant precedent for the issues they are exploring and devise case strategies based on analytics, and machines are becoming increasingly sophisticated at analyzing and assisting with contracts.
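To make this concrete, here is a minimal, hypothetical sketch of the kind of precedent retrieval such software builds on: ranking past cases by textual similarity to a new matter. The case names and summaries are invented for illustration, and no real legal research product is implied.

```python
# Hypothetical sketch of precedent retrieval: rank past cases by how
# closely their summaries match a new matter. All names and summaries
# below are invented; no vendor's actual product is being described.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Smith v Jones": "breach of contract over late delivery of goods",
    "R v Dalton": "criminal liability for theft of client data by an employee",
    "Acme v Beta Ltd": "contract dispute concerning software licensing terms",
}
query = "client dispute about a software supply contract"

# Vectorise the case summaries and the query together, then score the
# query (the last row) against every case summary.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(list(cases.values()) + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for name, score in sorted(zip(cases, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Real tools layer far more on top (citation graphs, outcome analytics), but similarity-based retrieval of this kind is a common starting point.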

Furthermore, the power of AI is being leveraged in many publicly funded sectors.

Addenbrooke’s Hospital in Cambridge is using InnerEye, developed by Microsoft, to automatically process scans for patients diagnosed with prostate cancer. The system takes an anonymised scan image, outlines the prostate on that image, marks up any areas where it finds tumors and reports back. This has sped up prostate cancer treatment, and the hospital is now considering using InnerEye to diagnose and treat brain tumors too.
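The outline below is a purely hypothetical sketch of that scan-in, contours-out workflow. It is not the InnerEye API, and the “model” here is a crude intensity threshold standing in for a trained segmentation network.

```python
# Hypothetical scan-segmentation workflow: anonymise -> segment -> report.
# None of these functions come from InnerEye; segment() is a simple
# threshold standing in for a trained segmentation model.
import numpy as np

def anonymise(scan: np.ndarray) -> np.ndarray:
    """In a real pipeline this would strip identifying metadata;
    here the pixel data simply passes through."""
    return scan.copy()

def segment(scan: np.ndarray) -> np.ndarray:
    """Label each voxel 0 (background), 1 (prostate) or 2 (suspected
    tumor) using intensity thresholds (illustration only)."""
    labels = np.zeros(scan.shape, dtype=np.int8)
    labels[scan > 0.4] = 1
    labels[scan > 0.8] = 2
    return labels

def report(labels: np.ndarray) -> dict:
    """Summarise the marked-up regions for the clinician."""
    total = labels.size
    return {
        "prostate_fraction": float((labels == 1).sum()) / total,
        "tumor_fraction": float((labels == 2).sum()) / total,
    }

scan = np.random.rand(64, 64, 32)  # fake 3-D scan volume
print(report(segment(anonymise(scan))))
```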

In Singapore, lawmakers are experimenting with AI combined with IoT sensors to monitor air quality, temperature and pollutant levels. Fed into predictive models, this data may help pinpoint where air quality problems are worst and how best to mitigate their effects, as the sketch below illustrates. Elsewhere, IBM researchers are testing a new kind of AI to reduce the severity of air pollution in Beijing. It is worth noting that Singapore currently has no plans to introduce AI-specific legislation, and the nation’s AI journey will prove an interesting comparison with the aforementioned countries in years to come.
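As a minimal sketch, with invented numbers, the prediction step can be as simple as regressing a pollutant level on sensor readings. Real deployments use far richer data and models; this only shows the shape of the idea.

```python
# Minimal sketch: predict a pollutant level from IoT sensor readings.
# All figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row of readings: [temperature C, humidity %, traffic density index]
readings = np.array([
    [21.0, 60.0, 0.3],
    [27.0, 45.0, 0.8],
    [18.0, 70.0, 0.2],
    [30.0, 40.0, 0.9],
])
pm25 = np.array([12.0, 35.0, 9.0, 42.0])  # observed PM2.5 in ug/m3

model = LinearRegression().fit(readings, pm25)

# Forecast for a hot, dry, high-traffic day.
tomorrow = np.array([[29.0, 42.0, 0.85]])
print(f"Predicted PM2.5: {model.predict(tomorrow)[0]:.1f} ug/m3")
```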

All of these applications raise important questions. For example, what should lawyers do to ensure that privileged client information is protected when it passes through AI tools, and that they continue to act in the best interests of their clients?

 

Closing Thoughts: The Lack of Regulation Around AI and What's Needed

The UK’s National AI Strategy recognises the importance of AI policy. Work is also under way to provide a more data-friendly ecosystem. The Strategy identifies several priorities for AI:

  • Reforms to privacy requirements and IP law
  • Promotion of a UK AI “assurance” ecosystem
  • Data security
  • Investigation into AI’s viability standards 
 
Encouraging though this may be, it should be part of a bigger conversation involving all key stakeholders about the proper regulation of AI. 
With all these issues at the forefront of the collective consciousness, the public could become highly sensitized to reports of unfair or downright unscrupulous AI use, and support for the technology, and the businesses that employ it, could plummet. All the innovation in the world can’t save a technology once public support has been lost (Google Glass, anyone?), which is why it is imperative that policymakers and national leaders catch up with innovation and make meaningful, progressive changes.
 

Our Faculty Says...

Mark Beer

“Firms may seek to avoid implementing AI solutions due to the upfront cost, or perhaps because the improved efficiency might cut into their billable hours, but that is arguably a short-sighted stance to adopt. The fact that AI will be able to do in 10 seconds what might have taken a team of lawyers 20 hours is a good thing, as it frees up the lawyers to do what they are best at: finding innovative solutions and providing a service more tailored to the needs of their clients.

All of this is perhaps a long-winded way of saying that the client will be the ultimate beneficiary, and that is the most important consideration, as once the market realises this is an option, clients will begin to seek out legal advisors who implement these solutions. Therefore I do believe that the market will dictate the future of LegalTech, and it is likely that those who embrace LegalTech and AI will replace those who do not.”

– Mark Beer OBE, Speaker, Strategic Advisor and Independent Non-Executive Director.

Want to learn more about AI?

Our globally recognized Legal Technology & Innovation Certificate provides hands-on learning in AI and Legal Tech innovation.

Cohort 4 is now open!