I'm fascinated by AI and the future of life. There's been much talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? That's why I wrote this book, to help you join the most important conversation of our time. I look forward to hearing your thoughts below and on Facebook.

    2017 Book Tour:
Aug. 29: Launch!
TBD: Reddit AMA
Sep. 8: New York
Sep. 15: Boston
Sep. 25: Stanford
Sep. 26: San Francisco
Sep. 27: Seattle & Redmond
Oct. 5: MIT
Oct. 25: Boston
Oct. 30: London
Nov. 1: Stockholm
Dec. 6: Los Angeles
Jan. 8: New York

"This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond." — Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors

"All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark's thought-provoking book will help you join it." — Prof. Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology

"Tegmark's new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation." — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity is Near and How to Create a Mind

"Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down." — Jaan Tallinn, co-founder of Skype

"This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity." — Bart Selman, Professor of Computer Science, Cornell University

"The unprecedented power unleashed by artificial intelligence means the next decade could be humanity's best-or worst. Tegmark has written the most insightful and just plain fun exploration of AI's implications that I've ever read. If you haven't been exposed to Tegmark's joyful mind yet, you're in for a huge treat." —Prof. Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age

"Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers&emdash;AI, cosmology, values, even the nature of conscious experience&emdash;can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions." —Prof. Nick Bostrom, Founder of Oxford's Future of Humanity Institute, author of Superintelligence

"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Prof. Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour

Editorial Reviews

Publishers Weekly: "Tegmark's smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons—and knotty questions: Will our digital overlords be conscious? Will they coddle us with abundance and virtual-reality idylls or exterminate us with bumblebee-size attack robots? While digerati may be enthralled by the idea of superintelligent civilizations where "beautiful theorems" serve as the main economic resource, Tegmark's future will strike many as one in which, at best, humans are dependent on AI-powered technology and, at worst, are extinct... Love it or hate it, it's an engrossing forecast." (Full review here.)

Science: See the full review here; it's too long to fit in this spot, but here are some of my favorite excerpts: "Whether it's reports of a new and wondrous technological accomplishment or of the danger we face in a future filled with unbridled machines, artificial intelligence (AI) has recently been receiving a great deal of attention. If you want to understand what the fuss is all about, Max Tegmark's original, accessible, and provocative Life 3.0: Being Human in the Age of Artificial Intelligence would be a great place to start. [...] Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements The Second Machine Age's economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence. [...] At one point, Tegmark quotes Emerson: "Life is a journey, not a destination." The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead."

Availability of the book

The book is now available on Amazon. I'm excited that it's coming out in many countries and languages:
Country         Publisher                           Planned launch
United States   Penguin Random House/Knopf (buy)    August 29, 2017
United Kingdom  Penguin (buy)                       August 29, 2017
Sweden          Volante (buy)                       October 2017
Germany         Ullstein (buy)                      November 17, 2017
Netherlands     Maven                               Fall 2017
France          Dunod                               2018
Italy           Raffaello Cortina                   2018
China           Cheers Publishing                   2018
Romania         Humanitas                           2018
Greece          Travlos                             2018
Russia          Corpus                              2018
Hungary         HVG                                 2018
South Korea     East Asia Publishing                2018

Contents of the book

Here's how I've organized the book:

I first explore the history of intelligence, from its humble beginning 13.8 billion years ago to a fascinating range of possible futures where life transforms from a minute perturbation into the dominant force in the cosmos. I end with what we personally can do to help life flourish in the future - which is more than one might think!

Videos related to the book

Here's a TEDx-talk about consciousness (I go into much greater depth about this in chapter 8):

Here's Elon Musk, Google DeepMind CEO Demis Hassabis and other great minds explaining whether they think superintelligence (chapters 4-6) will happen and how to create the best future with AI.

Discuss the book

I'd love to hear your questions and comments about these fun topics. Please join me on my Facebook page by clicking "Like" and posting your thoughts. In addition to hopefully answering your questions there, I'm planning to collect answers to the most common questions in the FAQ section below.

Frequently Asked Questions

Q: What perspective do you offer to the conversation about AI that's new? Why should people read your book?
A: There's been lots of talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks. Instead of shying away from this question, I focus on it in all its fascinating aspects, preparing readers to join the most important conversation of our time. Here are some questions the book raises: Will superhuman artificial general intelligence (AGI) arrive in our lifetimes? Can and should it be controlled and, if so, by whom? Can humanity survive in the age of AI and, if so, how can we find meaning and purpose if superintelligent machines provide for all our needs and make all our contributions superfluous? Will AGI be conscious? Can AGI help us fulfill the age-old dream of spreading life throughout the cosmos? What sort of future should we wish for? I write from my perspective as a physicist doing AI research at MIT, which lets me explain AI in terms of fundamental principles without getting caught up in technical computer jargon.

Q: Why is AI so important for our future?
A: We've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics that says one can't build machines more intelligent than us in all ways. This suggests that we've only seen the tip of the intelligence iceberg, and that there's an amazing potential to unlock the full intelligence that's latent in nature and use it to help humanity flourish.

Q: What inspired you and your team at the Future of Life Institute to invite Elon Musk, Larry Page, and other technology leaders to two conferences on AI in two years, and to give out millions of dollars in AI safety research grants?
A: We did this to transform the conversation on the societal implications of AI from polarized and dysfunctional to mainstream and collaborative. I'm delighted by how things have improved, but as I describe in the book, there are many tough open questions that we still need to answer if we're going to reap the benefits of AI while avoiding calamities. For example, how do we transform today's buggy and hackable computers into robust AI systems that we can really trust?

Q: Might we really go extinct/be wiped out by AI? Which of the many doomsday scenarios you chronicle in your book seem to be the most likely?
A: I think that AI will be either the best thing ever to happen to humanity, or the worst. Although I describe many scenarios that you may love or loathe, the crucial challenge isn't to quibble about which scenario is the most likely to happen to us, but to figure out what we want to make happen, and what concrete steps we can take today to maximize the chances that humanity will flourish rather than flounder in the future.

Q: Do you feel optimistic or pessimistic about our ability to ensure that the uses of artificial intelligence remain beneficial?
A: I'm optimistic that we can create an inspiring future with AI - but it won't happen automatically, so we need to plan and work for it! If we get it right, AI might become the best thing ever to happen to humanity. Everything I love about civilization is the product of intelligence, so if we can amplify our human intelligence with AI and solve today's greatest problems, humanity might flourish like never before. But the research needed to keep it beneficial might also take decades, so we should start it right away to make sure we have the answers when we need them. For example, we need to figure out how to make machines learn, adopt and retain our goals. And whose goals should they be? What sort of future do we want to create? This conversation is too important to be left to AI researchers alone!

Q: Do you think the anxieties and questions about robots and artificial intelligence that have been raised by Hollywood are a reflection of the true dangers of AI?
A: Terminator makes people worry about the wrong things. The real threat from advanced AI isn't malice, but competence: intelligent machines accomplishing goals that aren't aligned with ours. The robots in Westworld, Blade Runner and Terminator are surprisingly dumb. Her gives a better flavor of truly superhuman intelligence, yet hilariously it has almost no impact on the labor market. Transcendence gives a better indication of societal impact, but real superintelligence wouldn't be outsmarted by humans any more than you'd be outsmarted by a snail.

Q: What kind of research are you currently working on, and why has this become your primary focus? What issues do you hope to personally work on in your career?
A: My current MIT research focuses on what I call "intelligible intelligence": AI that you can trust because you can understand its reasoning. Today's deep learning systems tend to be inscrutable black boxes, so before putting them in charge of my car, plane or power grid, I'd like guarantees that they'll always work as intended and never get hacked. Before trusting medical or financial advice from a computer, I'd like it to be able to explain its conclusions in language that I can understand. I'm having fun exploring a physics-inspired approach to these challenges. More broadly, I'm excited about developing ways of guaranteeing that future technology is not merely powerful, but also beneficial.


Although I feel very grateful for the large amounts of positive feedback I've received from colleagues, reviewers and others across the web, my book has also received some spirited criticism, centered on the following questions (I'll add more as I get them!):

Q: Isn't superintelligence mere science fiction?
A: No — many leading AI researchers believe that it's possible in our lifetime, and superintelligence is mentioned in the Asilomar AI Principles that have been signed by over a thousand AI researchers from around the world, including AI leaders from industry and academia. It won't be like in the movies, though! This book surveys and analyzes the fascinating spectrum of arguments about when and what can and should happen.

Q: Isn't this baseless scaremongering that jeopardizes AI funding?
A: First of all, I feel that this is a fundamentally optimistic book, highlighting both the great potential of AI to help life flourish and the great opportunity we have to steer toward a good rather than bad future. The concerns that I describe are shared by a who's who of leading AI researchers (for example, superintelligence and existential risk are mentioned in the Asilomar AI Principles), and I mention them not to raise alarm, but to help ensure that we devote sufficient research to figuring out how to avoid problems. This is no different from advocating for fire extinguishers and seat belts, except that this time, the stakes are so great that we should get things right the first time rather than learn from mistakes. There's no evidence that advocacy for AI safety research has hurt general AI funding which, contrariwise, has mushroomed in the last few years.

Q: Isn't Max Tegmark a crackpot who knows nothing about AI?
A: The work I do with my MIT research group is focused on AI and related topics, as are 7 of my recent technical publications. For example, our paper with Yann LeCun and others on how to efficiently implement unitary recurrent neural networks was recently accepted to ICML. My AI research typically builds on physics-based techniques tracing back to some of my 200+ previous publications. In addition to this nerdy technical background, I've had the fortune to learn a great deal about AI's technical and societal implications from my work with the Future of Life Institute and from the AI leaders who participate in our conferences, discussions and research. The question of whether I'm a crackpot obviously isn't for me to say, but I found this analysis of the question hilarious.

We've assembled a rich collection of videos, podcasts, article links, etc. on our FLI AI resource page, organized by topic and level of accessibility.