OpenAI CEO Sam Altman said the company is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.
Altman’s announcement, made in a post on X late Thursday night, was light on details. But it, along with a similar deal OpenAI struck with the U.K.’s AI safety body in June, appears intended to counter the narrative that OpenAI has deprioritized AI safety in the pursuit of more capable, more powerful generative AI technologies.
In May, OpenAI disbanded a unit that was working on developing controls to prevent “superintelligent” AI systems from going rogue. Reports, including our own, suggested that OpenAI had set aside the team’s safety research in favor of launching new products, ultimately prompting the resignations of the team’s two leaders: Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who founded his own safety-focused AI company, Safe Superintelligence Inc.).
In response to a growing chorus of critics, OpenAI said it would eliminate the restrictive non-disparagement clauses that implicitly discouraged whistleblowing, create a safety commission, and dedicate 20 percent of its compute to safety research. (The disbanded safety team had been promised 20 percent of OpenAI’s compute for its work but never received it.) Altman reaffirmed the 20 percent pledge and reiterated that OpenAI had voided the non-disparagement terms for new and existing employees in May.
But these moves have done little to placate some observers, particularly after OpenAI staffed the safety commission entirely with company insiders, including Altman, and, more recently, reassigned a senior AI safety executive to another part of the organization.
Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a recent letter to Altman. OpenAI’s chief strategy officer, Jason Kwon, responded to the letter today, writing that OpenAI is “dedicated to implementing rigorous safety protocols at every stage of our process.”
The timing of OpenAI’s agreement with the AI Safety Institute seems a bit suspect in light of the company’s endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the AI Safety Institute as an executive body that sets standards and guidelines for AI models. Together, the moves could be perceived as an attempt at regulatory capture, or at the very least an effort by OpenAI to exert influence over AI policymaking at the federal level.
It’s no coincidence that Altman sits on the Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which makes recommendations on the “safe and secure development and deployment of AI” across critical infrastructure in the United States. OpenAI has also significantly increased its spending on federal lobbying this year, laying out $800,000 in the first six months of 2024 versus $260,000 in all of 2023.
The AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as major tech firms like Google, Microsoft, Meta, Apple, Amazon, and Nvidia. The industry group is tasked with working on the measures outlined in President Joe Biden’s October executive order on AI, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.