Monday, December 23, 2024

OpenAI and Google DeepMind workers warn of AI industry risks in open letter

A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI defended its practices in a statement, saying that it had avenues such as a tipline to report issues at the company and that it did not release new technology until there were appropriate safeguards. Google did not immediately respond to a request for comment.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson said.

Concerns over the potential harms of artificial intelligence have existed for decades, but the AI boom of recent years has intensified those fears and left regulators scrambling to catch up with technological advancements. While AI companies have publicly stated their commitment to safely developing the technology, researchers and employees have warned about a lack of oversight as AI tools exacerbate existing social harms or create entirely new ones.

The letter from current and former AI company employees, which was first reported by the New York Times, calls for increased protections for workers at advanced AI companies who decide to voice safety concerns. It asks for a commitment to four principles around transparency and accountability, including a provision that companies will not force employees to sign any non-disparagement agreements that prohibit airing risk-related AI issues and a mechanism for employees to anonymously share concerns with board members.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter states. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

Companies such as OpenAI have also pursued aggressive tactics to prevent employees from speaking freely about their work, with Vox reporting last week that OpenAI made employees who leave the company sign extremely restrictive non-disparagement and non-disclosure documents or lose all their vested equity. Sam Altman, OpenAI’s CEO, apologized following the report, saying that he would change off-boarding procedures.

The letter comes after two top OpenAI employees, co-founder Ilya Sutskever and key safety researcher Jan Leike, resigned from the company last month. After his departure, Leike alleged that OpenAI had abandoned a culture of safety in favor of “shiny products”.

The open letter on Tuesday echoed some of Leike’s statement, arguing that companies have shown no obligation to be transparent about their operations.