This is a response to a recent article by the effective altruist Ozy Brennan titled “The ‘TESCREAL’ Bungle,” published in Asterisk Magazine. Overall, Brennan’s article is interesting, and I enjoyed reading it. I think he gets some of my views wrong while also making some valid points, and I really appreciate that he wrote it.
In what follows, I’m going to reproduce his article and comment under specific claims. My views are in constant flux as I (try to) think deeper about these issues, so take this as a snapshot of my current thinking. These responses are also a bit conversational and at times a little desultory, so I hope readers will forgive me for that (think of it like a podcast transcript!). The original text is presented as block quotes as follows:
The TESCREAL “bundle of ideologies” is purportedly essential to understand the race to build artificial intelligence, the ethical milieu of those building it, and the philosophical underpinnings behind Silicon Valley as a whole. But does the label actually tell us anything?
A specter is haunting Silicon Valley — the specter of TESCREALism.
“TESCREALism” is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley.
I wouldn’t say that these beliefs are “loosely connected.” There is a very close connection between the seven ideologies referenced in the acronym. Gebru and I provide ample evidence for this in our paper, and my FAQ on TESCREALism also goes into some detail on this claim (which ought to be uncontroversial). I think the TESCREAList Elise Bohan does a good job of explaining the interconnections here.
The acronym unpacks to:
Transhumanism — the belief that we should develop and use “human enhancement” technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann’s.
Extropianism — the belief that we should settle outer space and create or become innumerable kinds of “posthuman” minds very different from present humanity.
Singularitarianism — the belief that humans are going to create a superhuman intelligence in the medium-term future.
More specifically, it’s often defined as the belief that we should create superintelligence in the near future, not just that we “are going to” do this.
Cosmism — a near-synonym to extropianism.
I wouldn’t say this about cosmism! Read Ben Goertzel’s A Cosmist Manifesto. Cosmism is something like transhumanism on steroids.
Rationalism — a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people’s ability to make good decisions and come to true beliefs.
Effective altruism — a community focused on using reason and evidence to improve the world as much as possible.
Longtermism — the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.
TESCREALism is a personal issue for Torres, who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times.
The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley — principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen — two influential thinkers Torres and Gebru have identified as TESCREAList — don’t agree on much.
I think they agree about most things. They are libertarians who believe that advanced technologies could usher in a utopian world. Their one small—but simultaneously large—disagreement is over the probability of human extinction if we create AGI in the near future. I write about this fact in detail in this article for Truthdig. When one is embedded within these communities, the differences in opinion between Andreessen and Yudkowsky might appear large. But if you take a step back, it’s clear that they agree about far more than they disagree about. (I’m now seeing that Brennan makes basically this point below. Still, worth underlining here.)
Eliezer Yudkowsky believes that with our current understanding of AI we’re unable to program an artificial general intelligence that won’t wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Marc Andreessen believes that artificial intelligence will be the most beneficial invention in human history: People who push for delay have the blood of the starving people and sick children whom AI could have helped on their hands. But their very disagreement depends on a number of common assumptions: that human minds aren’t special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, that intelligences descended from humanity can and should spread across the stars.
Yes, exactly.
As an analogy, Republicans and Democrats don’t seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you’d call this “liberal democracy.” Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. When you mostly talk to people who share your perspective, it’s easy to not notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It’s easy to stumble across Andreessen’s or Yudkowsky’s writing without knowing anything about transhumanism. The TESCREALism concept can clarify what’s going on for confused outsiders.
However, Torres is rarely careful enough to make the distinction between people’s beliefs and the premises behind the conversations they’re having. They act like everyone who believes one of these ideas believes in all the rest.
I definitely don’t think that “everyone who believes one of these ideas believes in all the rest” (though I do not want to speak for Dr. Gebru on this issue—I think she may have a different view). I discuss this in a section of my FAQ on TESCREALism here. For me, TESCREALism is that tradition of thinking that weaves through these seven ideologies as they developed over the past ~30 years, is broadly libertarian in flavor (the most notable exception being the stance of “AI doomers” toward AGI), and has played an integral role in launching, sustaining, and accelerating the race to build AGI.
In reality, it’s not uncommon for, say, an effective altruist to be convinced of the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism.
I generally agree, although see—once again—this interesting talk on the topic by Elise Bohan. In many ways, her work focuses on the TESCREAL bundle, even if she doesn’t use the term.
All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.”
Yes, because I think that utopian ideologies can be enormously dangerous. This is not hyperbole: there are endless examples throughout history of such ideologies “justifying” extreme actions for the sake of bringing about some imagined future paradise. In many cases, these utopian movements that became violent started off being explicitly peaceful, as in the case of the Anabaptists and Aum Shinrikyo.
Put differently, if the ends can justify the means (a kind of utilitarian thinking), and if the ends are a literal utopia in which we live forever, create infinite (or “astronomical”) amounts of value, etc., then what exactly is off the table for realizing this future? Let me quote an EA favorite, Steven Pinker, on this point:
Utopian ideologies invite genocide for two reasons. One is that they set up a pernicious utilitarian calculus. In a utopia, everyone is happy forever, so its moral value is infinite. Most of us agree that it is ethically permissible to divert a runaway trolley that threatens to kill five people onto a side track where it would kill only one. But suppose it were a hundred million lives one could save by diverting the trolley, or a billion, or—projecting into the indefinite future—infinitely many. How many people would it be permissible to sacrifice to attain that infinite good? A few million can seem like a pretty good bargain.
There is a very good argument for why one should see TESCREALism as a religious movement. I write about this in a section of my FAQ here.
Atheists, who don’t expect justice to come from an omnibenevolent God or a blissful afterlife, have sought meaning, purpose, and hope in improving this world since at least the writing of the 1933 Humanist Manifesto. It is perfectly natural and not especially sinister. If a community working together to create a better world is sufficient criteria to qualify as a religion, I’m all for religion.
Yes, to be clear: I’m not (necessarily) denigrating TESCREALism when I call it a “religion.” I do not have a problem with religion, in general. Rather, the point is simply to properly identify the nature of this techno-utopian worldview, which anticipates a “vast and glorious” future (to quote the TESCREAList Toby Ord) through AGI, space colonization, and so on.
Torres’ primary argument that TESCREALism is dangerous centers on the fondness that effective altruists, rationalists, and longtermists hold for wild thought experiments — and what they might imply about what we should do. Torres critiques philosopher Nick Bostrom for arguing that very tiny reductions in the risk of human extinction outweigh the certain death of many people who currently exist, Eliezer Yudkowsky for arguing that we should prefer to torture one person rather than allow more people than there are atoms in the universe to get dust specks in their eyes, and effective altruists (as a group) for arguing that it might be morally right to work for an “evil” organization and donate the money to charity.
It seems like the thing Torres might actually be objecting to is analytic ethical philosophy.
No: consider that, in the pages of Time magazine, Yudkowsky argued for an international treaty that would sanction military strikes against rogue datacenters—even at the risk of triggering a thermonuclear war. When he was asked on Twitter/X “How many people should be allowed to die to prevent AGI” from being built in the near future, he answered that so long as there are enough survivors in close proximity to rebuild civilization, “there’s still a chance of reaching the stars someday.” The minimum viable population might be as low as 150, or as high as 40,000. Let’s do the math, then: the global population right now is 8,114,291,961. Subtract 150 and you get 8,114,291,811. Subtract 40,000 and you get 8,114,251,961. Yudkowsky isn’t proposing his policy as a thought experiment: he’s explicitly trying to convince policymakers to adopt it, which could potentially put more than 8 billion people at risk.
Or consider an AI Safety workshop that was held in late 2022, organized by people who’ve worked at MIRI and Open Philanthropy. In the meeting minutes, someone suggested the following strategy for preventing the AGI apocalypse: “Solution: be Ted Kaczynski.” Later on, another person proposed the “strategy” of “start building bombs from your cabin in Montana,” where Kaczynski conducted his campaign of domestic terrorism, “and mail them to DeepMind and OpenAI lol.” This was followed a few sentences later by, “Strategy: We kill all AI researchers.”
It’s only a matter of time, in my view, before someone in the AI safety community believes we’re in an “apocalyptic moment” and concludes that a “justified” and “proportionate” response is to murder someone.
As for “evil organizations” (MacAskill’s actual phrase was “immoral organizations”—I accidentally misquoted him here), earning to give is why Sam Bankman-Fried went into crypto. And we all know how that turned out.
With respect to Bostrom’s thought experiments, yes—it’s absolutely a legitimate concern that some politician will take them seriously. Olle Häggström, who is otherwise sympathetic to longtermism, makes this point better than I could have when he writes:
I feel extremely uneasy about the prospect that [Bostrom’s calculations] might become recognised among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying ‘If you want to make an omelette, you must be willing to break a few eggs,’ which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
Effective altruists, rationalists, and longtermists have no monopoly on morally repugnant thought experiments. Analytic ethical philosophy is full of them. Should you tell the truth to the Nazi at your door about whether there are Jews in your basement? If you’re in a burning building, should you save one child or ten embryos? If an adult brother and sister secretly have sex, knowing that they’re both unable to conceive children, and they both had a wonderful time and believe the sex brought them closer and made their relationship better, did they do something wrong, and if so, why? Ethical philosophers argue both sides of these and many other morally repugnant questions. They’re trying to poke at the edge cases within our intuitions, the places where our intuitive sense of good and bad doesn’t match up with our stated ethical principles.
Right, and I’ve said on numerous occasions that if, say, David Benatar founded an institute that ended up with $46.1 billion in committed funding, if his followers were infiltrating major world governments, if his pro-extinctionist/antinatalist view were being promoted on the front pages of Time magazine, etc. etc., then I would be writing harsh—and urgent—criticisms of his philosophical positions. This is a crucial point: as I’ve also repeatedly said, the world is full of bizarre ideologies that take one to “crazy town”—a term that longtermists themselves use to describe radical (or strong) longtermism. But most of these aren’t worth criticizing because they don’t have power. The TESCREAL movement has an enormous amount of power.
Outside the philosophy classroom, ethicists mostly ignore the findings of their philosophy, as philosophers Joshua Rust and Eric Schwitzgebel have shown in a clever series of studies. Ethicists ignore ethical philosophy in ways we like (presumably even the most committed Kantian would lie if there were actually a Nazi at the door), but also in ways we don’t like (not donating to charity). Rationalists and effective altruists are unusual because they act on some of the conclusions of ethical philosophy outside of the classroom — and there, of course, comes the danger.
Yes, exactly.
In practice, Torres has found little evidence that effective altruists, rationalists, and longtermists have carried these particular thought experiments through to their conclusions.
Again, Yudkowsky is arguing in the pages of Time magazine that countries should be willing to engage in military strikes, even at the risk of nearly everyone on Earth dying, so long as “there’s still a chance of reaching the stars.” Heck, the entire AGI race is the direct result of the TESCREAL movement—and I think Brennan would agree with me that this race is extremely reckless and dangerous. Bankman-Fried, once again, caused profound real-world harm because of his utilitarian version of EA-longtermism. I myself have been on the receiving end of death threats and threats of physical violence from the EA community (very likely from a specific prominent figure within EA). So, there is plenty of evidence that thought experiments about trillions of future humans, working for “immoral organizations,” and so on and so forth, have caused actual harm in the world. (See also the book The Good It Promises, The Harm It Does for other examples.)
The more significant point, though, is that my critiques concern the ideologies themselves. These ideologies, I would argue, contain all the ingredients necessary to “justify” extreme actions. (One could say the same thing about the ideologies of the Anabaptists and Aum Shinrikyo.) Peter Singer himself made this point in writing that the EA-longtermist Holden Karnofsky
does not draw any ethical conclusions from his speculations [about this being the “Time of Perils”], other than advocating “seriousness about the enormous potential stakes.” But, as [Émile] Torres has pointed out, viewing current problems – other than our species’ extinction – through the lens of “longtermism” and “existential risk” can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth. Marx’s vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a “Thousand-Year Reich” was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior.
I am not suggesting that any present exponents of the hinge of history idea would countenance atrocities. But then, Marx, too, never contemplated that a regime governing in his name would terrorize its people.
No one has access to more people than there exist atoms in the universe, much less the ability to put dust specks in their eyes. 80,000 Hours, a nonprofit that provides career advice and conducts research on which careers have the most effective impact, has consistently advised against taking harmful jobs.
This is a relatively new development, so far as I’m aware.
Torres gives an example of an “evil organization” at which effective altruists recommend people work: the proprietary trading firm Jane Street. But Jane Street seems at worst useless. There are many criticisms to be made of a system in which people earn obscene amounts of money making sure that the price of a stock in Tokyo equalizes with the price of a stock in London slightly faster than it otherwise would.
Then maybe—and I bet you could get some EV figures to support this—the best thing to do would be to oppose capitalism, which also happens to be a root cause of the climate crisis and the AGI race that so many doomers are currently freaking out about.
But if someone is going to pay millions of dollars for people to do that, it might as well go to people who will spend it on medicine for poor children rather than to people who will spend it on a yacht.
But, as Nathan Robinson writes about this very issue:
You can of course see here the basic outlines of an EA argument in favor of becoming a concentration camp guard, if doing so was lucrative and someone else would take the job if you didn’t. But MacAskill says that concentration camp guards are “reprehensible” while it is merely “morally controversial” to take jobs like working for the fossil fuel industry, the arms industry, or making money “speculating on wheat, thereby increasing price volatility and disrupting the livelihoods of the global poor.” It remains unclear how one draws the line between “reprehensibly” causing other people’s deaths and merely “controversially” causing them.
It’s dumb to dump money from helicopters, but if someone dumps a million dollars in front of my house, I’m going to take it and donate it.
It’s true that Sam Bankman-Fried, an effective altruist Jane Street employee, went on to commit an enormous fraud — but the fraud was universally condemned by members of the effective altruist community.
To be clear, leading figures in the EA community have spread lies (by omission) about Bankman-Fried; see this article for details. They surely knew that he was flying in private jets, owned $300 million in Bahamian real estate, and so on. MacAskill was repeatedly warned about Bankman-Fried’s “unethical” behavior. What’s more, no one batted an eye when Bankman-Fried described DeFi, which he invested and traded in, as a Ponzi scheme! No one in the EA leadership had a problem with Bankman-Fried until he got caught.
People who do evil things exist in every sufficiently large social movement; it doesn’t mean that every movement recommends evil.
Re: “it doesn’t mean that every movement recommends evil”—agreed? I don’t think I’m arguing that “every movement recommends evil”!
The most important thought experiment — in terms of the weight Torres gives it and how TESCREALists actually behave — is about trade-offs related to so-called existential risk: the risk of either human extinction or a greatly curtailed future (such as a 1984-style dystopia). While most TESCREALists are worried about a range of existential risks, including bioengineered pandemics, the one most discussed by Torres is advanced artificial intelligence. Many experts in the field worry that we’ll develop extraordinarily powerful artificial intelligences without knowing how to get them to do what we want. If a normal computer program is seriously malfunctioning, we can turn it off until we figure out how to debug it. But a so-called “misaligned” artificial intelligence won’t want us to turn it off — and may well drive us extinct so we can’t.
It’s worth noting here that, as I write in a forthcoming article for Salon, no group has done more to launch, sustain, and accelerate the AGI race than the leading “doomers” themselves! DeepMind got funded because of Yudkowsky’s Singularity Summit, and Jaan Tallinn was an early investor in the company. Tallinn later helped Anthropic get $124 million, contributing $25 million of this himself. There are a ton of examples. It’s one of the great ironies of the absurd situation we now find ourselves in.
People who are worried about risks from advanced artificial intelligence generally expect that it will come very soon. Models created by people who are worried about risks from advanced artificial intelligence generally predict that we’ll develop it long before 2100. No significant number of people are saying, “Well, I think that in 999,999,999 out of 1,000,000,000 worlds we won’t invent an artificial intelligence in the next two hundred years, but I’ve completely reshaped my entire life around it anyway, because there are so many potential digital minds I could affect.”
It’s true that TESCREAList philosophers often debate Pascal’s mugging arguments: arguments that you should (say) be willing to kill four people for an infinitesimal decrease in existential risk. But Pascal’s mugging arguments are generally considered undesirable paradoxes, and TESCREAList philosophers often work on trying to figure out a convincing, solid counterargument. But it’s convenient for Torres’ case to pretend otherwise.
I think the word “undesirable” isn’t doing much work here. My reading of the longtermist literature is that it tends to see such paradoxes as a mere annoyance. Some, such as Hayden Wilkinson, have literally defended “fanaticism,” and as an EA Forum article notes, “one of Greaves and MacAskill’s responses to this counterargument [from fanaticism] cites Hayden Wilkinson’s In Defence of Fanaticism, suggesting perhaps we should be fanatical on balance.”
Another example: Greaves, MacAskill, Ord, Beckstead, and others have argued in Utilitas that we should not take the Repugnant Conclusion too seriously—this conclusion being that a world full of huge numbers of people with lives that are barely worth living would be better than a world in which a much smaller number of people are extremely happy. These are very radical—fanatical—ideas that leading figures within EA-longtermism embrace. As it happens, I asked one of the most prominent value theorists in the world what they thought of the Utilitas paper, and he told me that it “has upset many of my philosopher friends. In my view, there is a somewhat desperate ring to their declaration, and, in all honesty, I do not understand what made them write it” (quoted with permission).
Many rationalists, effective altruists, and longtermists talk about a concept called “getting off the crazy train.” Rationalists, effective altruists, and longtermists don’t want to be the hypocritical ethics professor who talks about the moral necessity of donating most of your income to help the global poor and then drives home in a Cadillac. They also don’t want to commit genocide because of a one-in-one-billion chance that it would prevent extinction.
I have to say that this made me guffaw a bit, because I have not seen a single leading EA or longtermist say anything about the ongoing genocide in Gaza.
It makes sense to get off the crazy train at some point. Human reason is fallible; it’s far more likely that you would mistakenly believe that this genocide is justified than that it actually is.
But it’s difficult to pick any sort of principled stop at which to deboard the crazy train. Some people are bought in on AI risk but don’t accept that a universe with more worse-off people can be better than a universe with fewer better-off people. Some people work on preventing bioengineered pandemics and donate a fifth of their salaries to buy malaria nets. Some people work on vaccines while worrying that everything will be pointless when the world ends. Some people say, “I might believe we live in a simulation, but I don’t accept infinite ethics; that stuff’s too wild,” even though the exact distinction being made here is unclear to anyone else. And everyone shifts uncomfortably and wants to change the subject when the topic of how they made these decisions comes up.
But there’s one particular stop on the crazy train Torres worries the most about. They critique longtermism sharply:
According to the longtermist framework, the biggest tragedy of an AGI apocalypse wouldn’t be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed. We should thus do everything we can to ensure that these future people exist, including at the cost of neglecting or harming current-day people — or so this line of reasoning straightforwardly implies.
Or, as Peter Singer, Nick Beckstead, and Matthew Wage write:
One very bad thing about human extinction would be that billions of people would likely die painful deaths. But in our view, this is, by far, not the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations.
They ask, “If the ends can justify the means, and the end is paradise, then what exactly is off the table for protecting and preserving this end?” In short, TESCREALists are so in love with the idea of a far-off paradise that they are willing to sacrifice the needs of people currently living.
At first blush, it seems insensitive, even cruel, to prioritize people who don’t exist over people who do. But it’s difficult to have common-sense views about a number of issues without caring about future people. For example, the negative effects of climate change are mostly on people who don’t exist yet — and that was even more true in the late 1980s when the modern consensus around climate change was first coalescing. Should we tolerate higher gas prices now to keep an island from sinking underwater a century from now? After all, high gas prices harm the people choosing between dinner and the gas they need to get to work right now. Why not just pollute as much as we want and stick future generations with the bill?
I think this gets at why the entire population ethics tradition/longtermist framework is probably misguided. When longtermists talk about “caring about future people,” what they mean is very different from what most people mean, on a “common-sense” interpretation. I, personally, agree that if people suffer 1,000 years from now, that suffering counts for just as much as the suffering of people today. Hence, we should be thinking about the long-run consequences of current actions. I would add here that, if one believes there’s a good chance that people will exist in, say, 200 years—and I think this is probably true—then we should be concerned about their wellbeing, insofar as we think we can do something to affect them (the farther out in the future one goes, the more utterly clueless we become about how current actions might affect these people).
But longtermism goes way beyond this. Even the “moderate” (vs. “radical” or “strong”) version of longtermism that MacAskill defends in his 2022 book What We Owe the Future is built on the Total View—a position in population ethics that yields the Repugnant Conclusion. “Caring about future people,” for longtermists, isn’t just about ensuring that their lives are good if they exist. It’s about creating as many of these people as possible, to maximize “intrinsic value” in the universe, thus making the universe “better.” Historically speaking, utilitarianism (which consists of two components, one of which is the Total View[1]) emerged around the same time as capitalism, and hence I don’t think it’s surprising that they’re very similar: both want to maximize something without any limit—profit in capitalism, and “intrinsic value” in utilitarianism. Longtermism thus essentially reduces ethics to a branch of economics, and it’s this economic conception of ethics that leads to conclusions far, far beyond “we should care about future people”—that is, as most people, like myself, would understand that phrase.
There’s also an irony here that no one, except me and Rupert Read, has pointed out: longtermists don’t actually care about the long-term future as such. Imagine two worlds: in World A, the human population falls to 10 million people per century, and we survive for another 10,000 centuries (1 million years). That results in a total of 100 billion future people. In World B, the human population grows to 50 billion per century, but we go extinct after only 100 centuries. That results in a total of 5 trillion people. In a forced-choice situation, which world should one pick—the longer or the shorter future? Given the Total View, we should pick the shorter future (World B), all other things being equal.
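To spell out the arithmetic behind this comparison, here is a minimal sketch of the Total View ranking. It assumes, purely for illustration (this assumption is mine, not drawn from Brennan or the longtermist literature), a constant positive average lifetime wellbeing w per person, so that a world’s total value is simply w times the number of people who ever live:

```latex
% Total View sketch: value of a world = w x (number of people who ever live), with w > 0 constant.
\begin{align*}
V_{A} &= w \times \underbrace{10^{7}}_{\text{people/century}} \times \underbrace{10^{4}}_{\text{centuries}}
       = w \times 10^{11} && \text{(100 billion people over 1 million years)} \\
V_{B} &= w \times (5 \times 10^{10}) \times 10^{2}
       = w \times 5 \times 10^{12} && \text{(5 trillion people over 10{,}000 years)}
\end{align*}
% Since V_B / V_A = 50, the Total View ranks the much shorter future (World B) fifty times
% higher, all other things being equal.
```

The point is just that, on this accounting, total headcount swamps duration: the future that lasts 10,000 times longer loses.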
Longtermism may rearrange our priorities, but it won’t fundamentally replace them.
Of course it could. If longtermism becomes the dominant view within EA—and it’s been trending in this direction—less and less money will be spent on animal welfare and global poverty. As Hilary Greaves points out in this interview, we often think of global poverty alleviation as the best way to help others, but “longtermist lines of thought suggest that something else might be better still.” Or recall Yudkowsky’s claim that nearly everyone on Earth should be “allowed” to die to prevent AGI from being built in the near future, since as long as there are enough survivors, “there’s still a chance of reaching the stars someday.”
Large effective altruist funders such as Open Philanthropy generally adopt a “portfolio” approach to doing good, including both charities that primarily affect present people and charities that primarily affect future people. Effective altruists are trying to pick the lowest-hanging fruit to make the world a better place. If you’re in an orchard, you’ll do much better picking the easily picked apples from as many trees as you can, rather than hunting for the tree with the most apples and stripping them all off while saying, “This tree has the most apples, and therefore no matter how hard it is to climb, all its apples must be the easiest to get!” Even if the long-term future is overwhelmingly important, we may run low on opportunities that outweigh helping people who already exist. (In fact, the vast majority of people in history were uncontroversially in this position.)
Further, the common-sense view is that, all things equal, things that are good for humanity in the short run are good for humanity in the long run.
Really? MacAskill talks about preventing economic stagnation by creating AGI or genetically enhanced baby Einsteins to keep the engines of the economy roaring. Is that good for humanity in the short run? (Note that I strongly disagree with MacAskill’s ardent growthism, which by all accounts will make climate change much worse. Studies suggest that 2 billion human beings will become climate refugees by 2100, and another 1 billion may perish as a direct result of climate change. Growthism is a root cause of the climate crisis.)
Great-power war and political instability increase the risk of AI race dynamics or the release of deadly bioengineered pandemics. If humanity is going to face future challenges head-on, it would help if more of its members were well-fed, well-educated, and not sick with malaria.
I don’t think that everyone in the longtermist community would agree. As Nick Beckstead writes:
To take another example, saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
I’m not saying that Beckstead has acted on this conclusion, but it is a pretty straightforward implication of radical longtermism. If someone were to take this version of longtermism seriously, then people in poor countries would suffer. Again, the main target of my critiques has been the ideologies themselves.[2]
Torres worries that longtermists would deprioritize climate change relative to other concerns. But to the extent that longtermism changes our priorities, it might make climate change more important. Toby Ord estimates a one in a thousand chance that climate change causes human extinction. If you’re not a longtermist, we should maybe prioritize climate change a bit more than we currently do. If you are a longtermist, we should seriously consider temporarily banning airplanes.
Present-day longtermists aren’t campaigning for banning airplanes, because they believe that other threats pose even larger risks of human extinction.
Is that the only reason?
The real disagreement between Torres and longtermists is about factual matters. If you believe that artificial intelligence might drive us extinct in 30 years, you worry more about artificial intelligence; if you don’t, you worry more about climate change. The philosophy doesn’t really enter into it.
I wrote about this issue in Truthdig here. I don’t find the proposed “kill mechanisms” by which a superintelligence would supposedly wipe out humanity all that plausible. That said, Gebru and I argue that AGI is an inherently unsafe technology, and hence my view is that we should not be trying to build it at all.
(For many longtermists, such as Bostrom, failing to build superintelligence would probably itself be an existential catastrophe. This partly explains why leading “doomers” have been disproportionately responsible for initiating and accelerating the AGI race.)
Torres hasn’t established that TESCREALists are doing anything extreme. Actions taken by TESCREALists that Torres frowns on include:
Participating in governments, foreign policy circles, and the UN.
Yes, definitely. As mentioned above, the world is full of “crazy-town” ideologies. The ones worth writing about—and fretting over—are those with enormous power.
Fundraising.
Giving advice to people about how to talk to journalists.
The EA community is very controlling!
Reaching out to people who are good communicators and thought leaders to convince them of things.
Following social norms and avoiding needless controversy.
No, my issue here is with being manipulative and lacking integrity: following social norms merely because doing so is instrumentally useful for getting what one wants.
Trying to avoid turning people off unnecessarily.
All social movements do these things. It isn’t a dark conspiracy for a movement to try to achieve its goals, especially if the movement’s philosophy is that we should direct our finite resources toward doing the most possible good.
I don’t consider this to be a “conspiracy.” I just think that ideologies that combine a utopian vision of the future with a broadly utilitarian mode of moral reasoning are inherently dangerous (history proves this!). When those ideologies gain power, I start to worry.
Torres has received death threats and harassment. I — like any minimally decent person — condemn death threats and harassment wholeheartedly.
These threats have specifically come from people in the EA community. Recall from above that some have also explicitly talked about killing AI researchers as a strategy to prevent the AGI apocalypse.
But harassment is an internet-wide problem, particularly for women and nonbinary people. If harassment were caused by TESCREAList extremism, people wouldn’t be sending each other death threats over not liking particular movies. If even one in ten thousand people thinks sending death threats is okay, critics will face death threats — but it’s unreasonable to hold the death threats against the 9,999 people who think death threats are wrong and would never send one. No major or even minor thinkers in effective altruism, transhumanism, the rationalist movement, or longtermism support harassment.
I disagree about the last sentence: I do think that there are major EA thinkers who are okay with harassment, though I won’t elaborate on this here. However, I take Brennan’s overall point.
Torres is particularly concerned about TESCREALists cavalierly running the risk of nuclear war. They criticize Eliezer Yudkowsky for supporting a hypothetical international treaty that permits military strikes against countries developing artificial intelligence — even if those countries are nuclear powers and the action risks nuclear war.
But almost any action a nuclear power takes relating to another nuclear power could potentially affect the risk of nuclear war. The war in Ukraine, for example, might increase the risk that Vladimir Putin will choose to engage in a nuclear first strike. That doesn’t mean that NATO should have simply allowed the invasion to happen without providing any assistance to Ukraine. We must trade off the risk of nuclear war against other serious geopolitical concerns. As the world grows more dangerous, our risk calculus should include the dangers posed by emerging technologies, such as bioengineered pandemics and artificial intelligence.
I think the point is that the “existential risks” from superintelligence are highly speculative. They are based on arguments that involve a lot of moving parts—the Orthogonality Thesis, the Instrumental Convergence Thesis (including the idea of recursively self-improving AI systems), the Value Fragility Thesis, etc.—all of which are potentially flawed in one or more ways. See this and this article for examples. When one is calling on governments to risk a thermonuclear holocaust for hypothetical, speculative risks, one must be very careful!
That said, I also think that there are well-grounded scenarios in which advanced AI causes modern civilization to collapse. Here’s an example: AI systems enable the generation and dissemination of climate disinformation around the world. Consequently, it becomes impossible to prevent catastrophic climate change, which—if one takes climate scientists seriously—could very well lead to civilizational collapse. In The Social Dilemma, Jaron Lanier says that he sees the risks of social media as genuinely “existential” for civilization—I would not only agree with that, but say the same thing about advanced AI. No need for talk of atmospherically self-replicating “diamondoid bacteria” (Yudkowsky) or mosquito-sized nanobots that “burgeon forth simultaneously from every square meter of the globe” (Bostrom). I wish the conversation about AI risks would focus on more concrete and plausible, though less “sexy,” scenarios like the one outlined above.
We shouldn’t engage in reckless nuclear brinkmanship, but similarly we shouldn’t be so concerned about nuclear war that we miss a rogue country releasing a virus a thousand times more deadly and virulent than COVID-19.
Torres’ implication that only TESCREALists think this way is simply false. Eliezer Yudkowsky’s argument is no different from calculations that have been made by policymakers across the globe since 1945. If anything, longtermists are more cautious about nuclear war than many saber-rattling politicians for the same reasons they care more about climate change. For example, 80,000 Hours characterizes nuclear security as “among the best ways of improving the long-term future we know of,” although it’s “less pressing than our highest priority areas.”
Torres themself supports a moratorium, perhaps even permanent, on research into artificial intelligence. I have no idea how they believe this would be enforced without the threat of some form of military intervention. Lack of intellectual honesty about the costs of your preferred policies is not a virtue.
I don’t think it’s intellectual dishonesty! Frankly, I have no idea how this would work in practice—and I mostly blame the TESCREALists, including the “doomers,” for creating the terrible mess that we now find ourselves in with respect to the AGI race. I really believe that if it weren’t for them, we very likely wouldn’t even be talking about “AGI” right now, which is yet another reason that I see TESCREALism itself as a profound danger to humanity. But I digress.
Paradoxically, although Torres believes that TESCREALists make a trade-off between the well-being of present-day people in the name of speculative hopes about the future, the policies Torres supports involve far more wide-ranging and radical sacrifices. They write:
[I]f advanced technologies continue to be developed at the current rate, a global-scale catastrophe is almost certainly a matter of when rather than if. Yes, we will need advanced technologies if we wish to escape Earth before it’s sterilised by the Sun in a billion years or so. But the crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it.
The solution? For us “to slow down or completely halt further technological innovation.” In a different article, they call for an end to economic growth and to all attempts to “subjugate and control” nature.
To be clear, I think that (a) the enterprise of technological development almost certainly can’t be stopped, and (b) that if this enterprise continues, the result will be an unprecedented global catastrophe. This is precisely why I’m so pessimistic about the future—it’s a major reason why I don’t, and won’t, have children! I think we’re in a really bad situation that we probably can’t escape.
It’s possible that Torres is phrasing their beliefs more strongly than they hold them.
I do hold these beliefs that strongly!
Perhaps they simply believe that we should avoid developing new technologies that pose an outsized risk of harm — a wise viewpoint originally developed by TESCREAList and philosopher Nick Bostrom.
My view is very different from Bostrom’s. As noted above, Bostrom thinks that failing to develop super-dangerous advanced technologies would itself be existentially catastrophic. (This is the whole reason that Existential Risk Studies was born: to understand and then mitigate the risks so we can create utopia via these technologies.)
In contrast, I think technology has made the world far worse in general, partly by introducing a flurry of unprecedented hazards to humanity. As I explain in a forthcoming academic article, as well as a forthcoming article for Truthdig, I think the world has never been worse than it is today, all things considered, because I believe that suffering should be counted in absolute rather than relative terms, and there has (undeniably?) never been as much total human suffering as there is right now. This isn’t just me insisting on the point, either—I have compiled a very comprehensive list of human suffering, and it is mind-boggling: e.g., 50 million people in modern-day slavery, 800 million children (more than 1/3 of the world’s children) with lead poisoning, 1.2 billion people in acute multidimensional poverty, ~50 million Americans living with chronic pain, some 500,000 people murdered each year, and so on and so forth.
But let’s say that Torres means what they say. Then let us be clear about the consequences of ending technological innovation, economic growth, and the control of nature. Throughout the vast majority of human history, only half of children survived to the age of 15; today, 96% do. Because of the Green Revolution and global transportation networks, for the first time in history, famine happens only if a government is too poorly run to take the simple steps necessary to prevent it.
No—famine was not something that our hunter-gatherer ancestors ever really encountered. It is a consequence of large sedentary communities plus agriculture. As for the Green Revolution, it has massively contributed to the environmental degradation that now threatens the biosphere, and could lead to civilizational collapse later this century (or the next). The costs of the Green Revolution and similar developments have been truly profound, from 400+ dead zones around the world to ocean acidification (happening 4.5 times faster than during the Great Dying) to a 69% decline in the global population of wild vertebrates since 1970. Yes, we managed to avoid mass death in the 1970s and 1980s, but only at the cost of jeopardizing our entire future on Earth in the coming centuries.
The only solution anyone has discovered for an effective end to poverty is economic growth. Before the Industrial Revolution, all but a tiny minority of elites lived in what we would currently consider extreme poverty.
I disagree. First, as the anthropologist Mark Cohen writes, “some of our sense of progress comes from comparing ourselves not to primitives [a dated word that I wouldn’t use] but to urban European populations of the fourteenth to eighteenth centuries. We measure the progress that has occurred since then and extrapolate the trend back into history.” This “progressivist” narrative from the Enlightenment is not correct. See Marshall Sahlins’ “The Original Affluent Society,” and the large literature that grew out of it, including the recently published The Dawn of Everything. Extreme poverty—which has actually been growing around the world in recent years—is a problem that we created.
Many disabled people rely on technology for their survival. If we end all attempts to control nature, innumerable disabled people will die, from people who need ventilators to breathe to premature babies in the NICU. I take a daily pill that treats the disease that would otherwise make my life unlivable; it costs pennies per dose.
These are cases where I very much do advocate for technology. Though I don’t have a completely worked-out view on the matter, I would argue that the sort of “megatechnics” that define our current era aren’t necessary for technologies that help, e.g., individuals with disabilities. There could be “small-scale” technological systems that enable all people to thrive without risking the collapse of every society around the world.
Worth noting, as well, that one of the leading EAs and former longtermist—Peter Singer—is a eugenicist who’s explicitly called for infants with certain disabilities to be murdered. As Singer and a coauthor write in their book Should the Baby Live?, “this book contains conclusions which some readers will find disturbing. We think that some infants with severe disabilities should be killed.”
My six-year-old son has all human knowledge available at his fingertips, even if he mostly uses it to learn more about Minecraft. Due to our economic surplus, an unprecedented number of people have the education and free time to develop in-depth opinions about philosophical longtermism.
But we had more free time ~10,000 years ago. Hunter-gatherers probably “worked” less than we do—and for the very same reason that we now have “all human knowledge available at [our] fingertips,” most of us end up working constantly, leading some scholars to introduce the term “weisure” (a portmanteau of “work” and “leisure”) to describe our lifestyles these days. It’s brutal—we almost certainly live in the most stressed-out, loneliest societies in all of human history. Most of us don’t appreciate this fact because we don’t have good points of reference to see just how pathological modern life is.
Technological progress continues to benefit the world. To pick only one example, since 2021, when Torres called for an end to technological innovation, solar technology has improved massively — making solar and other clean energy technologies one of our best hopes for fighting climate change.
Right, but this is a solution to a problem that we ourselves created. Extending Cohen’s remark above, I think a lot of our sense of “progress” comes from creating problems and then fixing them with science and technology. That gives us a feeling of forward movement, when in fact it’s just taking a step in one direction after taking a step in the exact opposite direction. There are “diseases of civilization” that have been cured—but without civilization, we wouldn’t have needed a cure in the first place. We invented braces to straighten out our teeth, but people in the past almost always had straight teeth! Even cancer might be “a modern, man-made disease caused by environmental factors such as pollution and diet.” Again, so much of this is making a mess, cleaning it up, and then declaring: “Progress!!”
And while large language models get the headlines, most inventions solve the boring problems of ordinary people, as they always have: For example, while traditional cookstoves are a major cause of indoor air pollution, we have yet to develop clean cookstoves that most developing-world consumers want to use. Technology matters.
See above.
For all their faults, TESCREALists usually have a very concrete vision of the future they want: interstellar colonization, the creation of nonhuman minds that transcend their creators, technology giving us new abilities both earthshattering (immortality!) and trivial (flight!). Torres’ vision is opaque at best.
Torres talks a lot about deliberative-democratic institutions and Indigenous wisdom. They call for “attunement to nature and our animal kin, not estrangement from them; humility, not growth-obsessed, technophilic, rocket-fueling of current catastrophic trends; lower birthrates, not higher; and so forth.” But they give few specifics about what they think a society marked by attunement to nature and humility and Indigenous wisdom would look like. Specifics about Torres’ ideal world, I think, would raise questions about what happens to the NICU babies.
I think this is fair. One of my main goals at this point in my career is to figure out a compelling alternative vision of the future. For the most part, I’ve been “philosophizing with a sledgehammer”—tearing down (or at least trying to tear down) views that I think are problematic and dangerous, but doing little to offer something in their place. So, Brennan is right about this!
Torres’ disagreement with TESCREALists is not about whether to care about future people, which they do.
But see my points above: what I mean by “caring about future people” is very different from what TESCREALists mean. On their view, (a) one way to “benefit” future people is to bring them into existence (so-called “existential benefits”)—which I reject; and (b) we must create as many future people as possible, and hence must colonize space and build “planet-sized” computers to run virtual reality worlds full of 10^58, or whatever, digital people—which I also reject.
It isn’t about whether we should sacrifice the well-being of current people in the hopes of achieving some future utopia: Although Torres criticizes utopian thinking, they engage in it themself. It isn’t even about what measures are acceptable to achieve utopia; Torres achieves moral purity through refusing to discuss how the transition to their ideal society would be accomplished.
My view definitely isn’t “utopian.” Perhaps it’s “protopian,” in the sense of Monika Bielskyte. I don’t know. But I do know that it’s not about creating a “utopia.”
It is entirely and exclusively about what the utopia ought to look like.
Many people find the TESCREAList vision of the future unappealing. The discussion of how we should shape the future should include more opinions from people who didn’t obsessively read science fiction novels when they were 16. But Torres’ critique of TESCREALism ultimately comes from an even more unappealing place: a complete rejection of technological progress.
Indeed, I would mostly object to the word “progress” here. I am very glad that infant mortality rates are much lower than long ago, but technology has also caused or enabled truly unfathomable amounts of human misery. Again, there has probably never been as much human misery on Earth as there is right now. Technology is also almost entirely responsible for our abysmal existential predicament, as longtermists themselves would agree. It is difficult—I would even say laughable—to talk about “technological progress” when, according to Bostrom, there’s a 20% chance of human extinction before 2100. How is that “progress”?
Torres can dismiss all TESCREALists out of hand because Torres is opposed to economic growth and even the most necessary control of nature.
I’m not opposed to all control of nature. See above.
Everyone else has to consider specific ideas. How likely is it that we’ll develop advanced artificial intelligence in the next century, and how much of a risk does it pose? What international treaties should we make about dangerous emerging technologies? Where should you get off the crazy train? These questions are important — and Torres’ critiques of TESCREALism don’t help us answer them.
But these critiques do foreground the fact that TESCREALism itself is hugely dangerous and built on philosophically flawed foundations. I think we can—and must—do better than TESCREALism, though I don’t have all the answers about which alternative futurological vision is better. Degrowth is clearly part of it, so is social justice. Beyond that, I’m not sure yet.
(Note that Brennan also has a Substack article responding to my criticisms of transhumanism as a form of eugenics, here.)
[1] This is the “axiological” component of utilitarianism. The “deontic” component just says: “Whatever maximizes value is then what you ought to do.”
[2] Though I have criticized the community as well.