The first time I met Eliezer Yudkowsky, he said there was a 99.5 percent chance that A.I. was going to kill me.
I didn’t take it personally. Mr. Yudkowsky, 46, is the founder of the Machine Intelligence Research Institute (MIRI), a Berkeley-based nonprofit that studies risks from advanced artificial intelligence.
For the last two decades, he has been Silicon Valley’s version of a doomsday preacher — telling anyone who will listen that building powerful A.I. systems is a terrible idea, one that will end in disaster.
That is also the message of Mr. Yudkowsky’s new book, “If Anyone Builds It, Everyone Dies.” The book, co-written with MIRI’s president, Nate Soares, is a distilled, mass-market version of the case they have been making to A.I. insiders for years.
Their goal is to stop the development of A.I. — and the stakes, they say, are existential.
“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die,” they write.