Earlier this year, a Google engineer made headlines when he claimed that the AI chatbot the company was developing had become sentient, comparing its intelligence to that of “a seven-year-old that just happens to know physics.” Google, of course, rubbished the claim, but as legislators across Europe and the US increasingly demand that high-level AIs be subject to government scrutiny, the former employee’s bold claims thrust the debate around responsible AI into the limelight.  

The debate surrounding responsible AI development and use is not a new one, but as artificial intelligence becomes ever more sophisticated and ingrained in our daily and working lives, there is still little consensus as to where the line should be drawn… 

Artificial intelligence presents remarkable opportunities to businesses, but not without equally remarkable responsibilities. After all, AI has a direct impact on the lives of consumers and society at large, raising serious questions around trust, legality, data governance and ethics. 

As pressure around ‘responsible’ AI mounts, companies continue to find innovative ways to offer vastly superior products with AI at the core. However, they need to be very mindful of regulation around AI and the steps they must take to ensure compliance. 

What exactly is Responsible AI, then?

To put it simply: it is the practice of designing, developing and deploying AI with objectively “good” intentions at its centre, so that it benefits employees and the business itself while also impacting customers and society at large in a fair and positive way. Applications range from facial recognition and security to medical screening and biological conservation projects. The positive applications of AI that we see and use daily have inured us to the technology, which in turn allows businesses to earn more trust and to scale their AI with confidence. 

Responsible AI – Creation and History

PwC, the second-largest professional services network in the world, began researching topics related to responsible AI in 2017 and launched the first Responsible AI toolkit in 2019. Alongside the launch, it unveiled a survey aimed at understanding the key priorities, concerns and organisational maturity required to deploy AI responsibly. 

At the time this concept was introduced, companies were still getting used to the idea of deploying AI. Early adoption of any new technology, while beneficial, can be fraught with pitfalls, and as a consequence businesses implemented inconsistent practices and encountered many hindrances along the way, being less aware of the inherent risks around AI deployment. 

Since the launch of the first Responsible AI toolkit, companies across entire industries, including IBM and Microsoft (which we briefly touch upon at the end of the article), have been able to come up with vastly improved end-to-end solutions based on Responsible AI. 

Since then, PwC has also updated its Responsible AI framework, enabling businesses to deploy AI far more ‘responsibly’ and effectively, while complying with the latest regulations. 

The components of responsible AI 

Responsible AI frameworks have been put in place to mitigate or, in some cases, eliminate the risks and dangers around machine learning (ML), which is a key part of AI. The core principles or components of responsible AI are as follows:


1. Fairness and inclusiveness

This principle states that AI systems should be designed to treat everyone fairly; they must not, for example, affect similarly situated people in different ways. So, in short: they have to be free of bias.

People tend to have biased judgement, whereas computer systems are (in theory, at least) potentially fairer. But ML models rely on real-world data, which means they are likely to inherit some biases. Facebook’s ad-serving algorithm comes to mind: it was accused of being discriminatory, as it reproduced gender disparities when displaying job listings.  

A further, more alarming finding came from a US trial of the PSA algorithm, intended to help streamline and fix the bail system. The Pretrial Justice Institute (PJI) urged New Jersey to adopt the risk assessment tool in 2014. The PJI had for years supported the use of such tools in place of cash bail, helping them spread to most US states. In 2020, however, it published a statement declaring that such tools had no place in pretrial justice because they perpetuate racial inequities.

“We saw in jurisdictions that use the tools and saw jail populations decrease that they were not able to see disparities decrease, and in some cases they saw disparities increase,” said Tenille Patterson, an executive partner at PJI.

She referred to New Jersey state figures showing that jail populations fell by nearly half after the changes took effect in 2017, eliminating cash bail and introducing the PSA algorithm. But the demographics of defendants stuck in jail stayed largely the same: about 50 percent black and 30 percent white. (Source: Wired, https://www.wired.com/story/algorithms-supposed-fix-bail-system-they-havent/) 

With examples like these causing such significant impact, the necessity is clear: we must work towards AI systems that are fair and inclusive for all, without causing undue harm or favouring any group. 
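A simple way to surface disparities like those above is to measure a fairness metric such as the demographic-parity gap: the difference in favourable-outcome rates between groups. The sketch below is a minimal illustration; the function name, data and group labels are all made up for the example:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, rates): the largest difference in favourable-outcome
    rates between groups, plus the per-group rates themselves.

    predictions: iterable of 0/1 model outputs (1 = favourable outcome)
    groups: iterable of group labels aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: a model granting favourable outcomes to 60% of
# group A but only 40% of group B.
preds = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
groups = ["A"] * 10 + ["B"] * 10
gap, rates = demographic_parity_gap(preds, groups)
```

A gap near zero suggests parity on this one metric; a large gap, as in the bail example, is a signal to investigate the training data and model before deployment.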

2. Privacy and security

All AI systems, like any other promising technology, must be built to resist attacks and to protect private information at all costs. 

For example, AI systems must comply with privacy laws and the regulatory authorities governing data collection, storage and processing, and must ensure the protection of personal information. In the US, HIPAA protects the privacy of patients’ health records, for example, while in the EU the GDPR governs data privacy. 

AI systems must comply with all such laws and regulations. Companies responsible for deploying AI-based services must ensure that they develop a comprehensive data management strategy to collect and process data responsibly, incorporate on-device training where needed, implement access control mechanisms and protect the privacy of ML models. 
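As a minimal sketch of one such data-management step, direct identifiers can be pseudonymised with a keyed hash before records ever reach an ML pipeline. The key handling and record fields below are illustrative assumptions, not a prescription from any particular regulation:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would come from a
# key-management service, never be hard-coded.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still be
    joined for analysis, but the raw identifier is never stored or
    processed downstream.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39", "clicks": 12}
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Pseudonymisation of this kind reduces exposure but is not full anonymisation: whoever holds the key can re-link tokens to identities, so key access itself must be controlled.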


3. Reliability and safety

This component states that AI systems must operate safely, reliably and consistently, both under normal circumstances and when conditions range from ‘unexpected’ to ‘unknown’. 

This means that companies working with AI must develop systems that offer robust performance, are safe to use, and are deployed in a way that minimises any negative impact. To ensure this level of AI reliability and safety, companies must: consider unlikely scenarios and how the AI system may respond; work out how a person using the system can make adjustments on the fly should anything go wrong; and prioritise human safety above everything else. 
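The “adjustments on the fly” step above can be sketched as a simple confidence-threshold guard that routes low-confidence or unexpected predictions to a person rather than acting on them automatically. The threshold, function and field names here are purely illustrative:

```python
# Hypothetical cut-off: predictions below this confidence are never
# acted on automatically.
CONFIDENCE_THRESHOLD = 0.90

def decide(prediction: str, confidence: float) -> dict:
    """Return the automated decision, or escalate to human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": prediction, "handled_by": "model"}
    # Unexpected or low-confidence input: defer to a human operator,
    # keeping a person in meaningful control of the outcome.
    return {"action": "escalate", "handled_by": "human_review"}

high = decide("approve", 0.97)   # confident: handled by the model
low = decide("approve", 0.55)    # uncertain: routed to a human
```

Real systems layer far more on top (monitoring, audit logs, kill switches), but the principle is the same: the more uncertain or unusual the situation, the less autonomy the system should have.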

4. Transparency, interpretability, and explainability

It should be easy for people to understand how AI systems arrive at decisions, particularly when those decisions can have a sizable impact on people’s lives. 

Each company may have its own approach to making AI more transparent, explainable and interpretable, but here are a few best practices to bear in mind:

  • Define interpretability criteria and add them to a checklist;
  • Make clear what data the AI system will use, its underlying purpose, and the key factors that may affect the final outcome;
  • At all the various stages of development and testing, document the AI system’s behaviour;
  • Communicate to end-users how a model works;
  • Explain how the system’s mistakes will be corrected.
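One lightweight way to act on the documentation bullets above is to keep a structured record of a model’s data, purpose and limitations alongside the model itself. The sketch below loosely follows the “model card” idea; every field name and value is an illustrative assumption:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """A lightweight record of the facts the checklist above calls for."""
    name: str
    purpose: str
    data_sources: list = field(default_factory=list)      # what data the system uses
    key_factors: list = field(default_factory=list)       # inputs that drive outcomes
    known_limitations: list = field(default_factory=list) # documented behaviour gaps
    error_handling: str = ""                              # how mistakes get corrected

card = ModelCard(
    name="loan-screening-v2",
    purpose="Flag applications for manual review; never auto-reject.",
    data_sources=["application form", "credit bureau feed"],
    key_factors=["income", "debt-to-income ratio"],
    known_limitations=["sparse training data for applicants under 21"],
    error_handling="Disputed decisions are re-routed to a human underwriter.",
)

# asdict(card) yields a plain dict that can be published to end-users
# alongside the model, satisfying the communication bullets above.
published = asdict(card)
```

Because the record is plain data, it can be versioned with the model and checked in review, so documentation stays in step with the system’s actual behaviour.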

5. Accountability

People using AI systems must maintain a degree of responsibility for, and meaningful control over, them. For example, stakeholders playing a part in the development of an AI system are directly responsible for its ethical implications. So, some questions that may need answering include:

  • Are there robust governance models for the AI system?
  • Have the various roles and responsibilities of individuals involved in the AI development been clearly defined?

The more autonomous an AI system is, the greater the accountability must be for the company responsible for developing, deploying and using it. 

Conclusion: Responsible AI will continue to rise

In stark contrast to the early days of AI responsibility frameworks, many leading technology companies have now taken steps to self-regulate and to try to establish an industry standard for responsible AI development, deployment and use. IBM has a dedicated ethics board overseeing issues around artificial intelligence. Similarly, FICO has come up with responsible AI governance policies to help employees and customers see how the company’s ML models work, as well as some of their limitations. Microsoft has developed its own responsible AI governance framework with the help of its AETHER Committee (AI, Ethics, and Effects in Engineering and Research) and its Office of Responsible AI (ORA).

As responsible AI becomes more mainstream, the need for effective and reliable user-focused AI systems will be ever more present, along with best practices which address considerations unique to ML. It is essential that industry and national leaders keep pace with the developments and advances in the technology, for only in truly understanding AI and its impact on society can we establish objective guidelines for its responsible use.
