Organisational design | Sep 5

Regulating AI: what should the tech sector expect?


Keeping on top of new technology and data rules is crucial in a constantly evolving tech industry. Here are some insights on monitoring compliance to stay ahead of the curve.


Data and digital technologies are transforming the way we live, work, and learn

In the tech world, data is often described as the ‘new oil’. Why? Because it’s seen as a valuable commodity with the power to transform business in today’s digital economy. An asset that, when harnessed correctly, could unlock substantial value.

Data can create new opportunities for business growth across sectors. As we move forward, it's imperative that we match our technological progress with robust ethical frameworks and regulations.

There are necessary protections in place to ensure people's data is kept safe and used appropriately. Existing UK data laws to help protect consumers include the Data Protection Act 2018, which controls how personal information is used by organisations, businesses, or the government. It's the UK's implementation of the General Data Protection Regulation (GDPR).

Artificial Intelligence: new UK legislation is being considered

While there are no UK laws that were explicitly written to regulate AI, it is partially regulated through a patchwork of legal and regulatory requirements built for other purposes, which now also capture uses of AI technologies.

UK data protection law includes specific requirements around ‘automated decision-making’ and the broader processing of personal data, which also covers processing for the purpose of developing and training AI technologies. The upcoming Online Safety Bill also has provisions specifically concerning the design and use of algorithms.

The Government's AI White Paper, which was published in late 2022, was a much-anticipated introduction to the legislation of AI. The paper sets out a principles-based approach to monitoring the rapid expansion in AI use, with the aim of ensuring regulation remains proportionate.

The UK government says it is committed to developing a pro-innovation position on governing and regulating AI and recognises the potential benefits of the technology. This should mean that any rules developed to address future risks and opportunities will both support businesses in understanding how they can incorporate AI systems and assure consumers that such systems are safe and robust.

What does AI regulation look like in other jurisdictions?

The EU has put the finishing touches to its first AI regulation, the AI Act, which aims to establish a legal framework for the development, deployment, and use of AI systems in the EU.

In the US, tech leaders were summoned to the White House in May to discuss responsible AI development. They covered three areas specifically:

  1. The need for companies to be more transparent with policymakers, the public, and others about their AI systems.
  2. The importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems.
  3. The need to ensure AI systems are secure from malicious actors and attacks.

As we move into a new era of automation, and generative AI, here are some key considerations for security, regulation, compliance, and people management.

Seven considerations for businesses using AI systems

  1. Understand the problem you are trying to solve. Implementing AI is significantly more effective when the problem is well understood first; AI depends on effective prompting and will not always produce perfect results.
  2. Build a talented, multidisciplinary team that can test the AI system thoroughly and improve the distribution of data.
  3. Start with small, manageable projects, and build from there.
  4. Prioritise ethics and transparency and consider how to explain the use of AI systems – what you’ve asked it to do and why. Does this meet your ethical requirements?
  5. Foster a culture of experimentation and learning, especially within bigger businesses where there may be a deeper cultural change required.
  6. Focus on governance, data quality, and transparency.
  7. Stay on top of regulatory developments.

Seven principles regulators might focus on

In its AI White Paper, the government says it expects well-governed AI to be used with due consideration to concepts of fairness and transparency. Similarly, it expects all actors in the AI lifecycle to appropriately manage risks to safety and to provide for strong accountability.

Some of the principles regulators might look at in your sector include:

  1. Ensure AI is used safely and securely.
  2. Ensure data accuracy.
  3. Retain oversight of AI systems.
  4. Ensure AI systems are used responsibly and ethically.
  5. Clarify ownership of derivative outputs generated using AI.
  6. Provide transparency so users can make informed decisions about how they use AI.
  7. Assign clear ownership of data.

This article was originally published on the NatWest Business Insights website.
