Tech bros aren’t always known for their sensitivity.
In a recent profile in the New Yorker, OpenAI CEO Sam Altman compared his vision of AGI — artificial general intelligence — to a “median human.”
He said: “For me, AGI…is the equivalent of a median human that you could hire as a co-worker.”
It’s not the first time Altman has referred to a median human. In a 2022 podcast, Altman said this AI could “do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.”
Altman’s company happens to be one of the current frontrunners for achieving AGI.
Although the term is disputed, AGI, or artificial general intelligence, is generally defined as an AI model that surpasses average human intelligence or achieves complex human capabilities like common sense and consciousness.
The comparison with median human intellect isn’t new. As writer Elizabeth Weil notes in her Altman profile, the term is used by “many in the tech bubble,” including AI insiders on Reddit, Twitter, and blogs.
An August report from McKinsey adopted similar benchmarks, including a graph of technical capabilities where generative AI is expected to match the “median level of human performance” by the close of the decade.
Still, the term, especially when used by those in charge of powerful AI models like OpenAI’s GPT-4, has been raising eyebrows.
“Comparing AI to even the idea of median or average humans is a bit offensive,” Brent Mittelstadt, director of research at the Oxford Internet Institute, told Insider. “I see the comparison as being concerning and see the terminology as being concerning too.”
“It’s interesting to use the term median human — that’s importantly different from average human,” Henry Shevlin, an AI ethicist and professor at the University of Cambridge, told Insider. “It makes the quote sound more icky.”
“There is an argument for thinking that Sam Altman could be more sensitive around this stuff,” he added.
However, Shevlin said the profile wasn’t intended to be a scientific paper and some level of quantification was needed in the complex field.
“One thing that current AI architectures and models have shown is that they can achieve basically typical human-level performance. That’s not problematic in itself,” he said. “I feel when we get into things like intelligence people are more touchy, and there are some good reasons for that.”
One reason is that the practice of trying to quantify intelligence has been marred by scientific racism, although, Shevlin added, it’s not inherently problematic.
How tech bros are defining this idea of median human intellect is open to question.
Mittelstadt said the link was rarely backed up in terms of a “concrete measurable comparison of human intelligence.”
He said: “I think it’s an intentionally vague concept as compared to having a very specific grounded meaning.”
“There’s all these different benchmarks that are used to evaluate the performance of language models or AGI,” he said. “They might be referring to IQ, for example, but then there’s all sorts of problems with that.”
Traditional measurements for comparing AI and human intelligence have tended to focus on capabilities rather than general intellect.
“A lot of the classic benchmarks have involved things like the ability to play chess, the ability to produce good code, or the ability to pass as a human,” Shevlin said.
But comparing AI with human intelligence at all can be ethically murky and potentially misleading, according to Mittelstadt.
“The problem is that you’re directly equating the performance of these systems with human capabilities or with human intelligence,” he said. “That is a hugely problematic leap to make because all of a sudden you’re assigning agency, comprehension, cognition, or reasoning to these mechanistic models.”
The post Tech bros keep obsessing about replacing the ‘median human’ with AI appeared first on Business Insider.