The futures that futurists imagine are what I envision: Yet more evidence that living without polluting or depleting isn’t regression
People consistently refer to life without polluting and depleting as returning to the Stone Age. The lack of historical awareness and imagination staggers the mind, or it would stagger mine if I didn’t realize that humans fill our minds with lies, or legitimizing myths, to mollify our feelings when we are induced to act against our values.
I was listening to William MacAskill (bio below) on Sam Harris’s podcast. I’m not sure they would describe themselves as futurists, but they aren’t far from it, and in the part of the conversation I quote below, they were exploring what life might look like if artificial intelligence worked out.
I quote several minutes of the conversation and share the audio, but I’ll call attention to a few comments about what they imagine life would be like if AI works purely benignly, so that no one has to work. What they describe is close to how egalitarian hunter-gatherer peoples live today.

MacAskill speculates:
the large work, the tedious, boring, unpleasant work that most people do; most people hate their jobs. Instead, we could be devoting that time to art and creativity and relationships and understanding of the universe. We may well look back at that as just like a sort of wage slavery or something.
The two of them continue. As far as I can tell, they don’t realize that they’re describing how humans lived before agriculture, but they are.
Sam: So that’s my final question: In this condition of abundance where AI has obviated the need for any human, not just drudgery, but any obligate human work, what do you imagine people spending their time doing?
I mean, like we’re all aristocrats. We can all just enjoy our leisure. Many people are worried about the kind of loss of human meaning that is currently anchored to productive work.
I worry less about that, I think, than some people. But what I’m picturing is a kind of revenge of the humanities where… Now no one is saying learn to code, right? That seems to be in the rear view mirror.
Do you just imagine a future where it’s all about art appreciation, fun games, and the amusements of increasingly refined culture that are, again, as I imagine them, more on the humanities’ side of the quad than on the STEM side of the quad?
Will: Yeah, I think that’s right. I think to see the future, look at retirees, look at kind of the rich who don’t need to work. What do people do?
We’ll spend a lot of time with friends and family, engage in creative pursuits, art or music, learn languages, travel the world, garden, play games.
And remember that there will be endless new opportunities as well as the old, where the games we’ll be able to play will be amazing, interesting, far better designed than any games we have today.
Or if you want to learn, you’ll get the most optimized education. With these new scientific breakthroughs, we’ll, perhaps before too long, really understand the laws of the universe.
I consider it tragic how much we’ve bought into the idea that our lives are great for reasons that don’t actually make them great.
Meanwhile, wonderful lives are available if we stop violating our own values and Constitution by polluting and depleting.
Here’s the original audio.
Show notes:
Sam Harris speaks with William MacAskill about effective altruism, AI, and the future of humanity. They discuss the post-FTX recovery of the EA movement, global health and pandemic preparedness, the limits of quantifiable ethics, the intelligence explosion, risks of concentrated AI power, what a post-scarcity world might look like, and other topics.
Will MacAskill is an associate professor of moral philosophy at Oxford University, and author of Doing Good Better, Moral Uncertainty, and What We Owe The Future. He cofounded the nonprofits 80,000 Hours, the Centre for Effective Altruism, and Giving What We Can, and helped to launch the effective altruism movement, which encourages people to use their time and money to support the projects that are most effectively making the world a better place.
The text of that part of their conversation, for reference:
Sam: Now we have robot cars and they’re obviously better than we are. They’re slowly getting rolled out. But at a certain point, we are going to look back upon the past where we had this ambient level of carnage year after year as just pointless destruction of human life. We’re so glad to be rid of that. We wouldn’t think of taking the wheel again and almost everything else is like that.
I mean, the fact that we haven’t yet cured cancer, whatever that cure is, it’s presumably within reach. It wouldn’t violate the laws of physics to cure cancer presumably. Once we have the cure or cures, even if there might have to be hundreds of them, delivered by AI, we’re going to look back at the legions of people who were killed or otherwise immiserated by cancer as just being so unlucky to have been born too early.
Everything, all of these contingent facts about us, are just that: they’re contingent, and these contingencies could be removed, every last one of them, by more intelligence.
Will: Absolutely. Similarly, the large work, the tedious, boring, unpleasant work that most people do; most people hate their jobs. Instead, we could be devoting that time to art and creativity and relationships and understanding of the universe. We may well look back at that as just like a sort of wage slavery or something.
Sam: So that’s my final question: In this condition of abundance where AI has obviated the need for any human, not just drudgery, but any obligate human work, what do you imagine people spending their time doing?
I mean, like we’re all aristocrats. We can all just enjoy our leisure. Many people are worried about the kind of loss of human meaning that is currently anchored to productive work.
I worry less about that, I think, than some people. But what I’m picturing is a kind of revenge of the humanities where… Now no one is saying learn to code, right? That seems to be in the rear view mirror.
Do you just imagine a future where it’s all about art appreciation, fun games, and the amusements of increasingly refined culture that are, again, as I imagine them, more on the humanities’ side of the quad than on the STEM side of the quad?
Will: Yeah, I think that’s right. I think to see the future, look at retirees, look at kind of the rich who don’t need to work. What do people do?
We’ll spend a lot of time with friends and family, engage in creative pursuits, art or music, learn languages, travel the world, garden, play games.
And remember that there will be endless new opportunities as well as the old, where the games we’ll be able to play will be amazing, interesting, far better designed than any games we have today.
Or if you want to learn, you’ll get the most optimized education. With these new scientific breakthroughs, we’ll, perhaps before too long, really understand the laws of the universe.
Sam: But what do you think? Take art appreciation and all of that: the provenance of this art, are we going to care whether it’s human-made? Specifically, do you imagine a world where we don’t care about the human origin of music, but we very much care about the human origin of literature?
Is it going to break down in ways that are counterintuitive or are we just not going to care in the end?
Will: I think it’ll be a mix. Consider music.
We’ve already automated music in that, if I’m deciding how to spend an evening, I could listen to any song ever made, with amazing sound quality, perfect recording, and so on.
I do that a lot. In fact, it enhances my life. From my point of view, as for how it was made, I don’t really care.
However, I also go to live music. In fact, we hosted a little gig in our living room just last week. While there, I was thinking about the quality of the music: it’s a local musician, so it’s obviously not the best possible music that could have been made.
But the overall experience has little to do with the music and everything to do with human connection and community. I think preferences for that will persist, so I expect that for literature, for music, for art, we will have an enormous abundance of little boutique, beautiful experiences, and people will continue to have a preference for them.
Sam: So you’re talking about the kind of the primacy of real world experiences as opposed to media?
Will: Well, both real world, but also human, so if instead we’d had the local robot come in and
Sam: play the guitar better than Jimi Hendrix?
Will: I don’t think it would have had the same vibe. I don’t think it would have, even if that robot were indistinguishable. Maybe we would also do that, because maybe it’s also fun, but my guess is that, at least for a long time, there will be both demand and supply for these sorts of boutique, tailored, local, meaningful interactions between humans and art and creativity.
Sam: What do you think is going to be the consequence of the AI—this is now a near-term question—the consequence of the AI sloppification of everything: deep fakes, and the inability to distinguish at a glance whether a video is of some real event or fake.
Do you think… I’m half expecting that this may actually be both a disease and a kind of cure for the fundamental disease that ails us, because I feel like we could suddenly declare epistemological bankruptcy with respect to the internet and social media, and begin to feel that we’re not going to react to, or even be interested in, imagery unless something like traditional gatekeepers vet it and tell us that it’s real.
I just noticed what’s happening to me. I’m not on social media very much because I deleted my personal account, but occasionally someone will send me an Instagram video and I’ll look at that and then a new video will be populated.
Sometimes I notice a video, and many of these are, or purport to be, nature videos: a woman is in her backyard with her little dog, a grizzly bear comes over the fence, and her dog saves her from being attacked by the grizzly bear. I’ve seen a few of these videos, and they look perfect. It really looks like a backyard encounter with a grizzly bear and a heroic dog, and it’s so obviously AI because it’s just too perfect and too far-fetched.
I realized that I’ve watched five or six of these videos and now I have zero interest in ever seeing anything like this again. I feel like, strangely, we might be getting dragged back to a previous condition where the gatekeepers have a lot of power, and you’re not going to care about anything you saw the president say in a clip, however that clip gets to you, unless it’s coming through some vetted channel, or vetting channel, like the New York Times or Getty Images or something that you trust for whatever reason.
Will: Yeah, I think there are a couple of phenomena here. Often with deep fakes and so on, people worry that others will come to believe all sorts of false things, but actually what happens in equilibrium is that people change their minds even less.
It’s already very hard to change people’s minds about anything, but if it’s now the case that you don’t know if something’s reliable, and in fact, if something disconfirms your worldview, you have this easy debunking argument, which is, “oh, it’s AI; it’s not even real,” then I think that will mean people change their views even less.
Sam: Right.
Will: There is a possible counter, which is AI systems themselves. At the moment, we are in a golden age, which I worry will come to an end, where you can just ask AI things and it’s pretty reliable, more reliable than humans.