I'm fascinated by AI and the future of life. There's been much talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? That's why I wrote this book: to help you join the most important conversation of our time. I look forward to hearing your thoughts below and on Facebook.

    2017 Book Tour:
Aug. 29: Launch!
Aug. 30: Reddit AMA
Sep. 8: New York
Sep. 15: Boston
Sep. 25: Stanford
Sep. 26: Google, Berkeley, SF
Sep. 27: Seattle & Redmond
Oct. 5: MIT
Oct. 18: Harvard
Oct. 25: Boston
Oct. 26: Toronto
Oct. 29: Cambridge
Oct. 30: London
Oct. 31: London
Nov. 1: Stockholm
Nov. 7: Lisbon
Dec. 6: Los Angeles
Jan. 8: New York

"This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond." — Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors

"All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark's thought-provoking book will help you join it." — Prof. Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology

"Tegmark's new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation." — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity is Near and How to Create a Mind

"Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down." — Jaan Tallinn, co-founder of Skype

"This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity." — Bart Selman, Professor of Computer Science, Cornell University

"The unprecedented power unleashed by artificial intelligence means the next decade could be humanity's best-or worst. Tegmark has written the most insightful and just plain fun exploration of AI's implications that I've ever read. If you haven't been exposed to Tegmark's joyful mind yet, you're in for a huge treat." —Prof. Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age

"Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions." —Prof. Nick Bostrom, Founder of Oxford's Future of Humanity Institute, author of Superintelligence

"I was riveted by this book. The transformational consequences of AI may soon be upon us—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds." —Prof. Martin Rees, Astronomer Royal, cosmology pioneer, author of Our Final Hour

Editorial Reviews

Science: See the full review here; it's too long to fit in this spot, but here are some of my favorite excerpts: "Whether it's reports of a new and wondrous technological accomplishment or of the danger we face in a future filled with unbridled machines, artificial intelligence (AI) has recently been receiving a great deal of attention. If you want to understand what the fuss is all about, Max Tegmark's original, accessible, and provocative Life 3.0: Being Human in the Age of Artificial Intelligence would be a great place to start. [...] Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements The Second Machine Age's economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence. [...] At one point, Tegmark quotes Emerson: 'Life is a journey, not a destination.' The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead." —Prof. Haym Hirsh

The Times: Here are the excerpts that made me blush the most, even though I think they're over-the-top: "Tegmark, a Swedish physicist with the smile of Cliff Richard and the mind of a modern Aristotle, is not like most people. In his magnificent brain, each fact or idea appears to slip neatly into its appointed place like another little silver globe in an orrery the size of the universe. There are spaces for Kant, Cold War history and Dostoyevsky, for the behaviour of subatomic particles and the neuroscience of consciousness. [...] Tegmark describes the present, near-future and distant possibilities of AI through a series of highly original thought experiments. [...] Tegmark is not personally wedded to any of these ideas. He asks only that his readers make up their own minds. In the meantime, he has forged a remarkable consensus on the need for AI researchers to work on the mind-bogglingly complex task of building digital chains that are strong and durable enough to hold a superintelligent machine to our bidding. [...] This is a rich and visionary book and everyone should read it." (Full review here)

The Telegraph: See the full review here; it's too long to fit in this spot, but here are some of my favorite snippets from it: "Tegmark's book, along with Nick Bostrom's Superintelligence, stands out among the current books about our possible AI futures. [...] Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It's often very funny, too [...] Do we want to live in a world where we are essentially the tolerated zoo animals of a powerful computer version of Ayn Rand; or will we inadvertently allow the entire universe to be colonised by "unconscious zombie AI"; or would we rather usher in a utopia in which happy machines do all the work and we have infinite leisure? The last sounds nicest, although even then we'd probably still spend all day looking at our phones." —Steven Poole, The Telegraph

Nature: (Full review here.) "The Economist has drily characterized the overarching issue thus: 'The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.' Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required." —Stuart Russell, Nature

Wall Street Journal: (Full review here, paywalled.) "Lucid and engaging, it has much to offer the general reader. Mr. Tegmark's explanation of how electronic circuitry - or a human brain - could produce something as evanescent and immaterial as thought is both elegant and enlightening. But the idea that machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists [...] Yet the notion enjoys more credence today than a few years ago, partly thanks to Mr. Tegmark." —Frank Rose, Wall Street Journal

Guardian: (Full review here.) "it should be among the most important items on our political agenda. Unfortunately, AI has so far hardly registered on our political radar. ... when science becomes politics, scientific ignorance becomes a recipe for political disaster. ... Max Tegmark's Life 3.0 tries to rectify the situation. Written in an accessible and engaging style, and aimed at the general public, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems. Life 3.0 does a good job of clarifying basic terms and key debates, and in dispelling common myths." —Yuval Noah Harari, The Guardian

Publishers Weekly: "Tegmark's smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons—and knotty questions: Will our digital overlords be conscious? Will they coddle us with abundance and virtual-reality idylls or exterminate us with bumblebee-size attack robots? While digerati may be enthralled by the idea of superintelligent civilizations where 'beautiful theorems' serve as the main economic resource, Tegmark's future will strike many as one in which, at best, humans are dependent on AI-powered technology and, at worst, are extinct... Love it or hate it, it's an engrossing forecast." (Full review here.)

Financial Times: (Full review here.) "'I view this conversation about the future of AI as the most important one of our time,' he writes. Life 3.0 might convince even those who believe that AI is overhyped to join in." —Clive Cookson, Financial Times

Kirkus Reviews: "explores one of the most intriguing scientific frontiers, artificial general intelligence, and how humans can grow along with it. [...] most will find the narrative irresistible." (Full review here)

Availability of the book

The book is now available on Amazon. I'm excited that it's coming out in many countries and languages:
Country         Publisher                           Planned launch
United States   Penguin Random House/Knopf (buy)    August 29 2017
United Kingdom  Penguin (buy)                       August 29 2017
Sweden          Volante (buy)                       October 5 2017
Germany         Ullstein (buy)                      November 17 2017
Netherlands     Maven                               Fall 2017
France          Dunod                               2018
Italy           Raffaello Cortina                   2018
China           Cheers Publishing                   2018
Romania         Humanitas                           2018
Greece          Travlos                             2018
Russia          Corpus                              2018
Hungary         HVG                                 2018
South Korea     East Asia Publishing                2018
Taiwan          Commonwealth Magazine               2018
Poland          Proszynski                          2018

Contents of the book

Here's how I've organized the book:

I first explore the history of intelligence, from its humble beginning 13.8 billion years ago to a fascinating range of possible futures where life transforms from a minute perturbation to the dominant force in the cosmos. I end with what we personally can do to help life flourish in the future, which is more than one might think!

Videos related to the book

Here's my wife Meia interviewing me about the book:
Here's my guest appearance on Minutephysics:
Here's Elon Musk, Google DeepMind CEO Demis Hassabis and other great minds explaining whether they think superintelligence (chapters 4-6) will happen and how to create the best future with AI.

Here's a TEDx-talk about consciousness (I go into much greater depth about this in chapter 8):

Podcasts about the book
Articles about the book

Discuss the book

I'd love to hear your questions and comments about these fun topics. Please join me on my Facebook page by clicking "Like" and posting your thoughts. In addition to hopefully answering your questions there, I'm planning to collect answers to the most common questions in the FAQ section below.

Frequently Asked Questions

Q: What perspective do you offer to the conversation about AI that's new? Why should people read your book?
A: There's been lots of talk about AI disrupting the job market and enabling new weapons, but very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? Instead of shying away from this question, I focus on it in all its fascinating aspects, preparing readers to join the most important conversation of our time. Here are some questions the book raises: Will superhuman artificial general intelligence (AGI) arrive in our lifetimes? Can and should it be controlled and, if so, by whom? Can humanity survive in the age of AI and, if so, how can we find meaning and purpose if superintelligent machines provide for all our needs and make all our contributions superfluous? Will AGI be conscious? Can AGI help us fulfill the age-old dream of spreading life throughout the cosmos? What sort of future should we wish for? I write from my perspective as a physicist doing AI research at MIT, which lets me explain AI in terms of fundamental principles without getting caught up in technical computer jargon.

Q: Why is AI so important for our future?
A: We've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics that says one can't build machines more intelligent than us in all ways. This suggests that we've only seen the tip of the intelligence iceberg, and that there's an amazing potential to unlock the full intelligence that's latent in nature and use it to help humanity flourish.

Q: What inspired you and your team at the Future of Life Institute to invite Elon Musk, Larry Page, and other technology leaders to two conferences on AI in two years, and to give out millions of dollars in AI safety research grants?
A: We did this to transform the conversation on the societal implications of AI from polarized and dysfunctional to mainstream and collaborative. I'm delighted by how things have improved, but as I describe in the book, there are many tough open questions that we still need to answer if we're going to reap the benefits of AI while avoiding calamities. For example, how do we transform today's buggy and hackable computers into robust AI systems that we can really trust?

Q: Might we really go extinct/be wiped out by AI? Which of the many doomsday scenarios you chronicle in your book seem to be the most likely?
A: I think that AI will be either the best thing ever to happen to humanity, or the worst. Although I describe many scenarios that you may love or loathe, the crucial challenge isn't to quibble about which scenario is the most likely to happen to us, but to figure out what we want to make happen, and what concrete steps we can take today to maximize the chances that humanity will flourish rather than flounder in the future.

Q: Do you feel optimistic or pessimistic about our ability to ensure that the uses of artificial intelligence remain beneficial?
A: I'm optimistic that we can create an inspiring future with AI - but it won't happen automatically, so we need to plan and work for it! If we get it right, AI might become the best thing ever to happen to humanity. Everything I love about civilization is the product of intelligence, so if we can amplify our human intelligence with AI and solve today's greatest problems, humanity might flourish like never before. But the research needed to keep it beneficial might also take decades, so we should start it right away to make sure we have the answers when we need them. For example, we need to figure out how to make machines learn, adopt and retain our goals. And whose goals should they be? What sort of future do we want to create? This conversation is too important to be left to AI researchers alone!

Q: Do you think the anxieties and questions about robots and artificial intelligence that have been raised by Hollywood are a reflection of the true dangers of AI?
A: Terminator makes people worry about the wrong things. The real threat from advanced AI isn't malice, but competence: intelligent machines accomplishing goals that aren't aligned with ours. The robots in Westworld, Blade Runner and Terminator are surprisingly dumb. Her gives a better flavor of truly superhuman intelligence, yet, hilariously, it has almost no impact on the labor market. Transcendence gives a better indication of societal impact, but real superintelligence wouldn't be outsmarted by humans any more than you'd be outsmarted by a snail.

Q: What kind of research are you currently working on, and why has this become your primary focus? What issues do you hope to personally work on in your career?
A: My current MIT research focuses on what I call "intelligible intelligence": AI that you can trust because you can understand its reasoning. Today's deep learning systems tend to be inscrutable black boxes, so before putting them in charge of my car, plane or power grid, I'd like guarantees that they'll always work as intended and never get hacked. Before trusting medical or financial advice from a computer, I'd like it to be able to explain its conclusions in language that I can understand. I'm having fun exploring a physics-inspired approach to these challenges. More broadly, I'm excited about developing ways of guaranteeing that future technology is not merely powerful, but also beneficial.


Although I feel very grateful for the large amounts of positive feedback I've received from colleagues, reviewers and others across the web, my book has also received some spirited criticism, centering on the following questions (I'll add more as I get them!):

Q: Isn't superintelligence mere science fiction?
A: No — many leading AI researchers believe that it's possible in our lifetime, and superintelligence is mentioned in the Asilomar AI Principles that have been signed by over a thousand AI researchers from around the world, including AI leaders from industry and academia. It won't be like in the movies, though! This book surveys and analyzes the fascinating spectrum of arguments about when and what can and should happen.

Q: Isn't this baseless scaremongering that jeopardizes AI funding?
A: First of all, I feel that this is a fundamentally optimistic book, highlighting both the great potential of AI to help life flourish and the great opportunity we have to steer toward a good rather than bad future. The concerns that I describe are shared by a who's-who of leading AI researchers (for example, superintelligence and existential risk are mentioned in the Asilomar AI Principles), and I mention them not to raise alarm, but to help ensure that we devote sufficient research to figuring out how to avoid problems. This is no different from advocating for fire extinguishers and seat belts, except that this time, the stakes are so great that we should get things right the first time rather than learn from mistakes. There's no evidence that advocacy for AI safety research has hurt overall AI funding, which has, on the contrary, mushroomed in the last few years.

Q: Isn't Max Tegmark a crackpot who knows nothing about AI?
A: The work I do with my MIT research group is focused on AI and related topics, as are 7 of my recent technical publications. For example, our paper with Yann LeCun and others on how to efficiently implement unitary recurrent neural networks was recently accepted to ICML. My AI research typically builds on physics-based techniques tracing back to some of my 200+ previous publications. In addition to this nerdy technical background, I've had the good fortune to learn a great deal about AI's technical and societal implications from my work with the Future of Life Institute and from the AI leaders who participate in our conferences, discussions and research. The question of whether I'm a crackpot obviously isn't for me to answer, but I found this analysis of the question hilarious.

We've assembled a rich collection of videos, podcasts, article links, etc. on our FLI AI resource page, organized by topic and level of accessibility.