OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’

December 29, 2025

OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role.

OpenAI is hiring a “head of preparedness” to reduce harms associated with the technology, such as risks to user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing.

“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.

OpenAI’s push to hire a safety executive comes amid growing corporate concern about the risks AI poses to operations and reputations. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited AI-related risk factors tied to reputational harm, including AI datasets that surface biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis.

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the social media post.

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.

OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role focused on AI reasoning, with AI safety remaining part of that job.

OpenAI’s efforts to address AI dangers

Founded in 2015 as a nonprofit with the intention of using AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part because of concerns that the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.

OpenAI has faced multiple wrongful death lawsuits this year alleging that ChatGPT encouraged users’ delusions and claiming that conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot.

OpenAI said in August that its safety features could “degrade” during long conversations between users and ChatGPT, and the company has since made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise it on guardrails that support users’ wellbeing, and it has updated ChatGPT to respond better in sensitive conversations and to expand access to crisis hotlines. At the beginning of the month, the company announced grants to fund research on the intersection of AI and mental health.

The tech company has also acknowledged the need for stronger safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. The company is taking steps to mitigate those risks, such as training models not to respond to requests that could compromise cybersecurity and refining its monitoring systems.

“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”

The post OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’ appeared first on Fortune.
