#1
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Price: $19.68
4.6/5
(569 reviews)
What Customers Say:
-
Nathan Metzger
"The reality we face, laid out in stark detail"

In 2023, hundreds of leading AI research scientists and engineers signed a 22-word statement that reads in full: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signatories include AI godfather Yoshua Bengio (the world's most cited living scientist), AI godfather and Turing Award winner Geoffrey Hinton, both co-authors of the standard textbook on AI, many dozens of independent and academic AI researchers, and even the CEOs and technical leads at the leading AI labs. Separately, the most recent poll of AI researchers publishing in peer-reviewed journals found that, on average, they set the odds of human extinction from AI at about 1 in 6.

What the heck is going on? Why are independents, academics, engineers, and even CEOs all in agreement about this? Why is it that the people who know the most about AI tend to be the most worried? Why are they barreling forward anyway? What about [your objection here]? And is there anything we can do to stop this?

This book answers all of that and more. It is written for a general audience, it explains the situation from the beginning, and it is a pretty short read. It is actually an enjoyable book, despite the dire nature of the subject matter. As a bonus, the free online supplementary material provides more technical detail and in-depth answers to many common objections.

Soares and Yudkowsky have done research on AI existential risk for over one and two decades, respectively, since long before today's AI companies or LLMs existed, and before the race to superintelligence began. The safety problems they described in their early research have not been solved; if anything, we have mostly learned that getting the creation of powerful AI systems to go well is harder than we previously thought. Today, as more and more of the theory of AI existential risk has become horrifyingly empirical, it looks like we may be running out of time.

All is not lost. This is not a book of doom and gloom, though the authors do not shy from grim facts. It is a call to action for citizens and policymakers alike: sensible, ordinary kinds of action that you can take yourself. Read the book. Join a movement. Hold your loved ones close. Help steer the world away from the cliff, so that we may live another day and try again tomorrow.
-
Tyler P
"Arguably the most important book of the 21st century"

This is arguably the most important book of the 21st century. Whether you know a lot about AI or very little, Yudkowsky and Soares put together an easy-to-read case for their view of the reality of the existential risk of Artificial Super Intelligence (ASI). The book is NOT an extremely long read, and you do not need advanced mathematics or programming knowledge to read and digest its contents, making it perfect for someone who is curious but not deep in the field, while the text still goes deep enough for experts or for those who have followed some of Yudkowsky's past work.

They put together several arguments and cases on the risk of ASI that are easy to comprehend and understand. They also give the best explanation I have seen, period, of how a modern AI is "grown", in 2-3 short pages. Once again, you do not need advanced mathematics or programming knowledge to understand their explanation.

I could go on and on, but what you need to know is this: the book is worth the purchase price. It is clear and understandable, whether you are a governmental figure, an AI researcher, or an "average" joe or jane wanting to know more about the risks of AI. If you have followed Yudkowsky's work in the past, this is a well-put-together synopsis of points he has made previously, with some new spins on them; together with Soares, he has made one of the most understandable and valuable AI texts of 2025.

They do not pull punches in this book; they tell it to you straight. The raw, straightforward truth of the potential danger explained in the text truly emphasizes the ultimate price humanity may pay if nothing changes in the current landscape of safety measures, or the lack thereof, around AI today. There are multiple other resources offered throughout the text and via a separate website for those interested, making the book that much more worthwhile. I would recommend and encourage anyone even the slightest bit curious to purchase this book. It is worth the read.
-
Jacob Egner
"Important, enjoyable, accessible"

Overall, it is an enjoyable, well-written, and well-argued book. I really enjoy Eliezer Yudkowsky's style, though other people seem to really dislike it. In this book, his unique style is toned down, so I think it is still interesting and enjoyable but much more accessible and generally palatable. There has been good editing to make the book's points more concise than Eliezer usually makes them. The book has a very clear thesis and builds up to it well, and there is supplementary material for additional points and to address further questions and objections. I think it is a good central resource for making the argument about the existential risk of superintelligent AI, bringing together in a streamlined way a lot of things I have read from Eliezer on Twitter and elsewhere.

Again, I think they have done a very good job of making the book accessible to everybody, not just people heavily into AI and rationality. If someone wants to learn the arguments for why AI is an existential risk, this is probably the best single resource.

One blunder I think they make: they mention the possibility of restrictions on owning more than 9 top-of-the-line GPUs, neglecting to mention that this would cost hundreds of thousands of dollars, nowhere close to what many readers might imagine (like nine $2K consumer GPUs).
-
Amazon Customer
"Very informative and a bit scary"

An easy read that does a pretty good job of explaining the SAI (superhuman AI) threat to humanity in layman's terms. I wish it had delved a little deeper into deep learning and how gradient descent truly makes AI learning so mysterious. Also, the parables got a little verbose and unnecessarily long. Overall, very informative and a bit scary. Worth a read.
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All is one of the best-selling products on Amazon, with 569 reviews and a 4.6/5 star rating.
Current Price: $19.68