AI Employees Should Have a ‘Right To Warn’ About Looming Trouble | Commentary


Rules allowing employees at AI research houses to warn about impending problems are sensible given the technology’s increasing power.

[Image: AI-generated image by Midjourney/Big Technology]

By now, we’re starting to understand why so many OpenAI safety employees left in recent months. It’s not due to some secret, unsafe breakthrough (so we can put “what did Ilya see?” to rest). Rather, it’s process-oriented, stemming from an unease that the company, as it operates today, might overlook future dangers.

After a long period of silence, the quotes are starting to pile up. “Safety culture and processes have taken a back seat to shiny products,” said ex-OpenAI Superalignment co-lead Jan Leike last month. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, an ex-governance team employee, soon afterward.
