DNYUZ
OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’

December 29, 2025

OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role.

OpenAI is hiring a “head of preparedness” to reduce harms associated with the technology in areas such as user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing.

“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said.

OpenAI’s push to hire a safety executive comes amid companies’ growing concerns about AI’s risks to their operations and reputations. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited AI-related reputational harm in their risk factors. These reputation-threatening risks include AI datasets that contain biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis.

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman said in the social media post.

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.

OpenAI’s previous head of preparedness, Aleksander Madry, was reassigned last year to a role focused on AI reasoning, which still includes AI safety as part of the job.

OpenAI’s efforts to address AI dangers

Founded in 2015 as a nonprofit with the intention of using AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part because of concerns that the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.

OpenAI has faced multiple wrongful death lawsuits this year, alleging ChatGPT encouraged users’ delusions, and claiming conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot.

OpenAI said in August that its safety features could “degrade” during long conversations between users and ChatGPT, and the company has since made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise the company on guardrails to support users’ wellbeing, and it has updated ChatGPT to respond better in sensitive conversations and to increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research on the intersection of AI and mental health.

The tech company has also acknowledged that it needs improved safety measures, saying in a blog post this month that some of its upcoming models could present a “high” cybersecurity risk as AI rapidly advances. The company is taking steps to mitigate those risks, such as training models not to respond to requests that could compromise cybersecurity and refining its monitoring systems.

“We have a strong foundation of measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”

The post OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’ appeared first on Fortune.

DNYUZ © 2026
