Cybersecurity guidelines for developers working on new AI systems have been published by the UK and 17 of its allies. This is the government’s latest move to take a leading role in the debate over AI safety, following an international summit held at Bletchley Park earlier this month.
According to the UK’s National Cyber Security Centre (NCSC), the guidelines aim to raise the level of cybersecurity in artificial intelligence and help ensure that AI systems are designed, developed and deployed securely.
These will be formally announced this afternoon at an event hosted by the NCSC and attended by 100 industry and public sector partners.
New cybersecurity guidelines for AI development announced
The Guidelines for Secure AI System Development were developed by the NCSC and the US Cybersecurity and Infrastructure Security Agency (CISA) in collaboration with industry experts and 21 other international organizations and agencies around the world.
These will help developers of systems that use AI make informed cybersecurity decisions at every stage of the development process, the NCSC said. This includes systems created from scratch and systems built on top of tools and services provided by other companies.
The guidelines are expected to help ensure that developers adopt a “secure by design” approach to building AI systems, incorporating cybersecurity into new designs from the outset.
NCSC CEO Lindy Cameron said: “AI is developing at an incredible pace, and we know that keeping up will require concerted international action across governments and industry.
“These guidelines mark an important step in building a truly global, shared understanding of the cyber risks and mitigation strategies around AI, ensuring that security is a core requirement throughout development rather than an afterthought.”
The guidelines are categorized into four main areas: Secure Design, Secure Development, Secure Deployment, and Secure Operations and Maintenance, each of which includes recommended behaviors to help improve security.
CISA Director Jen Easterly said the guidelines are “an important milestone in the joint effort by governments around the world to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”
Easterly continued: “Uniting nationally and internationally to advance the principles of secure by design and cultivate a strong foundation for safely developing AI systems around the world will support the evolution of our shared technology. This could not come at a more important time,” she added.
“This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border cooperation in securing our digital future.”
UK seeks to lead conversation on AI safety
In addition to the UK and US, other countries supporting the guidelines include Germany, France and South Korea.
The guidelines build on the findings of the AI Safety Summit, hosted by the UK Government at Bletchley Park and attended by government officials, the world’s leading technology vendors and AI labs.
The Bletchley Declaration was agreed at the event, with signatories pledging to work closely together on AI safety. Developers such as OpenAI and Anthropic have also agreed to submit next-generation, or frontier, AI models for testing by the UK’s recently announced AI Safety Institute. Prime Minister Rishi Sunak said the institute would be the first of its kind in the world, though the US government has also set up a similar body.
Michelle Donelan, Technology Secretary, said: “We believe the UK is setting international standards when it comes to the safe use of AI. The publication of these new guidelines by the NCSC ensures that cyber security is at the heart of every stage of AI development and that protection from risk is considered holistically.”