The UK’s new white paper on artificial intelligence (AI) regulation highlights a pro-innovation approach and addresses potential risks. Experts say there is a need for a collaborative, principles-based approach to tackle the AI arms race and maintain the UK’s global leadership.
Key figures in AI are also calling for the suspension of training powerful AI systems amid fears of a threat to humanity.
The UK government has released a white paper outlining its pro-innovation approach to AI regulation and the importance of AI in achieving the nation’s 2030 goal of becoming a science and technology superpower.
The white paper is part of the government’s ongoing commitment to invest in AI, with £2.5 billion invested since 2014 and recent funding announcements for AI-related projects and resources.
It suggests AI technology is already providing tangible benefits in areas such as the NHS, transportation, and everyday technology. The white paper aims to support innovation while addressing potential risks associated with AI, adopting a proportionate and pro-innovation regulatory framework that focuses on the context of AI deployment rather than specific technologies. This will allow a balanced evaluation of benefits and risks.
The Secretary of State for Science, Innovation and Technology, Rt Hon Michelle Donelan MP, wrote about the paper: “Recent advances in things like generative AI give us a glimpse into the enormous opportunities that await us in the near future if we are prepared to lead the world in the AI sector with our values of transparency, accountability and innovation.
“To ensure we become an AI superpower, though, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed.”
The proposed regulatory framework acknowledges that different AI applications carry varied levels of risk, and will involve close monitoring and partnership with innovators to avoid unnecessary regulatory burdens. The government will also rely on the ‘expertise of world-class regulators’ who are familiar with sector-specific risks and can support innovation while addressing concerns when needed.
To assist innovators in navigating regulatory challenges, the government plans to establish a regulatory sandbox for AI, as recommended by Sir Patrick Vallance. The sandbox will offer support for getting products to market and help refine interactions between regulation and new technologies.
In the post-Brexit era, the UK aims to solidify its position as an AI superpower by actively supporting innovation and addressing public concerns. The pro-innovation approach will incentivize AI businesses to establish a presence in the UK and facilitate international regulatory interoperability.
The government’s approach to AI regulation relies on collaboration with regulators and businesses, and does not initially involve new legislation. It aims to remain flexible as technology evolves, with a principles-based approach and central monitoring functions.
Public engagement will be a crucial component in understanding expectations and addressing concerns. Responses to the consultation will shape the development of the regulatory framework, with all interested parties encouraged to participate.
‘A joint approach across regulators is sensible’
Pedro Bizarro, chief science officer at financial fraud detection software provider Feedzai, comments that the government’s pro-innovation approach to AI regulation provides a roadmap for fraud and anti-money laundering leaders to embrace AI responsibly and effectively.
“A one size fits all approach to AI regulation simply won’t work, and so while we believe a joint approach across regulators is sensible, the challenge will be ensuring those regulators are joined up in their approaches,” says Bizarro.
“The financial industry is no stranger to AI; in fact, it’s at the forefront of its adoption. These five principles pave the way for banks to continue to harness the power of AI to combat financial crime while fostering trust, transparency, and fairness in the process.
“While we wait for the practical guidance from regulators, fraud and AML leaders should review their current AI practices and ensure they align with the five principles. By adopting a proactive approach, banks can stay ahead of the curve and continue leveraging AI to improve fraud detection and AML processes while maintaining compliance with evolving regulations.”
‘Tackle the overarching threat’
The UK government’s release of its plans for a ‘pro-innovation approach’ to AI regulation adds credence to the importance of regulating AI, says Keith Wojcieszek, global head of threat intelligence at Kroll.
“Right now, we are witnessing what could be called an all-out “AI arms race” as technology platforms look to outdo each other with their AI capabilities. Of course, with innovation there is a focus on getting the technology out before the competition. But for truly successful innovation that lasts, businesses need to be baking in cyber protection from the start, not as a regulatory box ticking exercise.
“As more AI tools and open-source versions emerge, hackers will likely be able to bypass the controls added to the systems over time. They may even be able to use AI tools to beat the controls over the AI system they want to abuse.
“Further, there is a lot of focus on the dangers of tools like ChatGPT and, while important, there is a real risk of focusing too much on just one tool when there are a number of chatbots out there, and far more in development.
“The question isn’t how you can defend against a specific platform, but how we work with public and private-sector resources to tackle the overarching threat and to discern problems that haven’t surfaced yet. This is going to be vital to the defence of our systems, our people and our governments from the misuse and abuse of AI systems and tools.”
‘Step in the right direction’
Philip Dutton, CEO and founder of data management, visualisation and data lineage company Solidatus, is excited by the potential of AI to revolutionise decision-making processes, but argues that it must be used with precision to guide decisions correctly.
“The UK Government’s recommendations on the uses of AI will help SMEs and financial institutions navigate the ever-growing space, and regulators issuing practical guidance to organisations is welcome if a little overdue.
“We should also recognise the role of data in creating AI. Metadata linked by data lineage plays a critical part in ensuring effective governance over both the data and the consequent behaviour of the AI. High-quality AI will then feed back into AI-powered active metadata, improving data lineage and governance in a beneficial cycle.
“I see a future in which data governance, AI governance and metadata management are all mutually beneficial, creating an ecosystem for high-quality data, reliable and responsible AI, and more ethical and trustworthy use of information.”
The steps the UK is taking in regulating AI are a necessary ‘evil’, suggests Michel Caspers, co-founder and CMO at finance app developer Unity Network.
“The AI race is getting out of hand and many companies who create AI software are creating it just to make sure they don’t fall behind the rest. This rat race is a huge security risk and the chance of creating something without knowing the true consequences is getting bigger every day.
“The regulations the UK is implementing will make sure that there is some form of control over what is created. We don’t want to create SkyNet without knowing how to turn it off.
“Short term it might mean that the UK AI industry can fall behind others like the US or China. In the long term it will create a baseline with some conscience and an ethical form of AI that will be beneficial without being a threat that humans can’t control.”
‘Threat to humanity’
Separately from the UK white paper release, Elon Musk, Steve Wozniak and other tech experts have penned an open letter calling for an immediate pause in AI development. The letter warns of potential risks to society and civilisation posed by human-competitive AI systems in the form of economic and political disruptions.
The letter said: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control.
“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”
OpenAI, the company behind ChatGPT, recently released its GPT-4 technology, which can perform tasks including answering questions about objects in images.
The letter calls for development to be temporarily halted at the GPT-4 level. It also warns of the risks that future, more advanced systems might pose.
“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”
‘Need to become more vigilant’
Hector Ferran, VP of marketing at AI image generation tool BlueWillow AI, says that while some have expressed concerns about potential negative outcomes resulting from the use of AI, it is crucial to recognise that malicious intent is not exclusive to AI tools.
“ChatGPT does not pose any security threats by itself. All technology has the potential to be used for good or evil. The security threat comes from bad actors who will use a new technology for malicious purposes. ChatGPT is at the forefront of natural language models, offering a range of impressive capabilities and use cases.
“With that said, one area of concern is the use of AI tools such as ChatGPT to augment or enhance the existing spread of disinformation. Individuals and organisations will need to become more vigilant and scrutinise communications more closely to try to spot AI-assisted attacks.
“Addressing these threats requires a collective effort from multiple stakeholders. By working together, we can ensure that ChatGPT and similar tools are used for positive growth and change.
“It’s crucial to take proactive measures to prevent the misuse of AI tools like ChatGPT-4, including implementing appropriate safeguards, detection measures, and ethical guidelines. By doing so, organisations can leverage the power of AI while ensuring that it is used for positive and beneficial purposes.”