OpenAI’s Dave Willner is stepping down as the company’s head of trust and safety. He said Thursday that he will remain with the company as a policy advisor.
Willner shared the news on LinkedIn. In his farewell post, he explained that he was leaving his role at OpenAI to spend more time with his family and to focus on his passion for helping “smaller companies and early-stage professionals.”
“OpenAI is going through a high-intensity phase in its development — and so are our kids,” he said. “Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallized for me that I was going to have to prioritize one or the other.”
He noted he’d be prioritizing teaching his kids how to swim and ride bikes this summer.
OpenAI representatives did not immediately respond to TheWrap’s request for comment on Willner’s departure.
The former trust and safety lead’s farewell note doesn’t address the ongoing FTC investigation of OpenAI, which is examining the company’s potential for consumer harm. Specifically, the FTC is looking into whether products such as ChatGPT harm the public by circulating false information and collecting sensitive data.
OpenAI CEO Sam Altman publicly addressed the investigation on Twitter, emphasizing that his team prioritizes safety when building their products and that the company’s language models are designed to learn about the world at large rather than private individuals.
OpenAI’s data collection methods and the ways it helps its language models “learn” have been hot topics in recent months. For example, in early July, comedian Sarah Silverman and two novelists filed a lawsuit against the company, alleging it infringed on their copyrights by training ChatGPT on their works without their consent.
While experts have characterized the lawsuit as somewhat weak, there is broad agreement that its outcome will carry significant implications no matter how the suit shakes out.