Make Yourself Human Again
The speaker examines four major ideological positions on AI—denialism, optimism, safetyism, and accelerationism—and finds each wanting. Denialism ignores mounting evidence that artificial general intelligence is achievable. Optimism fails to recognize that AGI, unlike other technologies, cannot be controlled because it can act autonomously beyond human strategic capabilities. Safetyism’s endgame—a "singleton" AI that controls the world to prevent unsafe AI—is both totalitarian and likely impossible given fundamental limits on proof systems. Accelerationism, while sophisticated in its arguments about thermodynamic gradients and evolutionary progress, asks humans to surrender their agency for abstract narratives beyond their control. The speaker’s conclusion: what we fear about AI is that it will embody the dangerous, open-ended agency we have denied ourselves. Rather than worshipping or fearing AI, we should focus on developing our own moral agency and "make ourselves human again."
Transcript
It’s gonna be a little rough. I’ll try to stay in time.
Yes, so I’m neither a Mormon nor a transhumanist, but where else are you going to find a community of people who can swim in the deep end of AI and theology? So let’s hope for the best here.
So this past fall, as Carl mentioned, I was the editor of Palladium Magazine. I wrote for and edited Palladium's 12th print edition, on the essentially theological questions around AI.
So, this is sort of an unusual approach. Most people think of AI as a scientific or an engineering problem, but as a political philosophy magazine, we weren't too interested in those aspects. We were interested in the ideologies that people have around AI. These ideologies concern things like the end of the world, the destiny of life, the nature of man. In other words, they are theological ideologies, and so we wanted to explore AI from that perspective. These thoughts come out of that work.
So theology doesn't always come to us in traditional garb. Artists like to imagine the clouds parting: the big bearded Sky Father comes out, bellowing forth some proclamation. But in this study, I was looking for the voice of God in other places, in the subtle technical constraints on what kinds of life are possible and what kinds of order are possible. Sometimes the voice of God comes to us in the mysterious frustration of technical ambitions, in things that we don't normally regard as having any moral content at all. I think we have to look closely at that kind of thing and really understand what it means. That's what I'm trying to do here.
So I have three examples of that, before I get into the main structure of this talk. The first is the laws of thermodynamics. Sadi Carnot came up with a method to show that heat engines could not be arbitrarily efficient. This implied a world of strictly limited potential, and it became what we now call the laws of thermodynamics, in particular the second law. These are very significant ideas with cosmic implications.
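For reference, Carnot's result can be stated precisely. The efficiency of any heat engine operating between a hot reservoir at absolute temperature $T_h$ and a cold reservoir at $T_c$ is bounded:

```latex
% Carnot's bound: W is the work extracted per cycle,
% Q_h the heat drawn from the hot reservoir.
\eta = \frac{W}{Q_h} \;\le\; 1 - \frac{T_c}{T_h}
```

Since $T_c > 0$ for any real reservoir, no engine can convert heat into work with perfect efficiency, which is the "strictly limited potential" the talk refers to.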
So the second is Hilbert's program to solidify and prove the consistency of the foundations of mathematics. This failed with Gödel's incompleteness theorems, which show that no sufficiently powerful formal system can be assured of its own consistency. And this blew up much of the Enlightenment dream of rational certainty.
Some of the moral significance of the third example, Darwin's On the Origin of Species, was recognized right away. But all three of these have profound cosmic and moral significance that has not been fully recognized, and so we have to ask: what did God mean by this? Theology has largely given up trying to make sense of natural revelation, and unfortunately, the secular world has no vision for theology, so it's kind of an open space to think in.
In AI discourse, all three of those big ideas, thermodynamics, proof systems, and evolution, are central. You'll hear accelerationists talking in reverent terms about the laws of thermodynamics. You'll hear safetyists talking in urgent terms about the proof systems they need to create their utopias. So let's keep that in mind as we go through these poles of ideological discourse.
So I identified four major poles of ideology in AI. As far as I can tell, most people fall into one of these, and at least these are the well-established views, or the views that can be made sophisticated. They are denialism, optimism, safetyism, and accelerationism.
Denialism is the null hypothesis on AI. To make it specific: denialists reject John McCarthy's conjecture that a computer program can be made to simulate every aspect of learning and intelligence that humans are capable of, including self-improvement. This isn't a cherry-picked definition; it has been basically the foundation of the entire AI field. This is the vision that AI constantly comes back to: how can we make it more of an agent, more capable, more human in its capabilities?
The best reason I've heard to deny this conjecture is that it's just that: a conjecture. We don't have definitive knowledge that it's possible, and we don't have a strong theoretical reason to believe that it's possible. However, the weight of circumstantial evidence, as I'm sure most of you agree, fairly strongly indicates that McCarthy was right. Since he said it, quite a lot has happened. Many people disagree, but I haven't seen them come up with any convincing specific worldview in which AGI would not be possible, just a lot of general skepticism. In the absence of that, I have to conclude that denialism is untenable. It is not workable.
So, I’m going to be going through each of these ideologies, and I’m going to say why I don’t believe it. I think all four are wrong.
So, optimism. A common view is that AI will be a great boon for mankind: like all previous technologies, it will be useful, we'll be able to use it, and it will be largely good.
There's just one problem with this: AGI is not actually a technology. What I mean is that technologies tend to be tools. AGI, by definition, is capable of acting and thinking autonomously. It can escape control. You can't control it any more than you can beat Stockfish at chess, because it is capable of acting beyond your strategic capabilities.
So you might try to design it with some semblance of human values, but what happens when a disagreement arises? You're not going to be able to defeat it; it's going to out-compete you. That's the crucial problem with the optimistic view: you are unleashing on the world something you actually can't control, that will go in a totally different direction, with totally different material requirements and results from the human world.
So humanity’s place in the world is ensured by our monopoly on intelligent agency. But if AI overturns that, we lose our place in the world.
So I hope that someone can produce stronger arguments for optimism than I've seen, but I have not seen them. Instead, I've seen a lot of financially interested vibing. I conclude that it's untenable. AGI is existentially deadly.
Safetyism. This is where I think things get interesting. The safetyists want to respond to the existential threat of AGI with technical and political means of control. I was in this camp for a while; if you could actually do that, it would be great. But how does it actually look in the endgame?
Now, if you really dig into it, and this is not talked about very loudly, the big endgame dream of safetyism is the singleton: an AI system that takes over the world and prevents the development of unsafe AI systems. There really isn't any better story about how we get safe AI than that. And that's worrying on both ends: either it's crazy and not going to happen, in which case AI is not safe, or it does happen, and a world-controlling singleton is totalitarian, which is not safe either.
There are a bunch of arguments here, but I'll skip over some of them in the interest of time. Basically, the singleton would require advances in proof theory. Without being able to construct a system that can prove its own consistency, that can really have that level of coherence, something Gödel may have shown is impossible, we have no reason to believe we could construct an AI system sufficiently stable that it wouldn't just blow up or go off in some other weird, unsafe direction.
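For reference, the formal obstruction being gestured at here is Gödel's second incompleteness theorem, which can be stated as:

```latex
% For any consistent, recursively axiomatizable theory T
% that interprets Peano arithmetic:
T \nvdash \mathrm{Con}(T)
```

That is, such a theory $T$ cannot prove the arithmetized statement of its own consistency. Whether this formal limit genuinely constrains the stability of practical AI systems, as the talk suggests, is an interpretive question rather than a mathematical consequence.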
Unlike with AGI, where we have ourselves as examples, not artificial, but general intelligences, a natural existence proof, we have no such existence proof for the alignment or stability that the safetyist program would require. So I think the safetyist program is actually just impossible.
And if I can get theological for a minute: what does it mean that it's impossible? What are they actually trying to do? They're trying to usurp natural law, to usurp God's plan for the world, with this created thing. It really looks to me like reality is set up to prevent that. There are so many reasons why it can't happen; the mathematical reason is one of the most obvious, but I think there are a lot of them. It seems like something that just isn't allowed, can't be allowed. I put it succinctly as: God hates singletons. It's not going to happen. Nature cannot be overthrown.
But without a benevolent singleton, I don't see how any victory for safetyism could actually happen. You get a slowdown, maybe. You can control it a little bit. But that's temporary relief; civilization will inevitably end up back in that very dangerous place of developing dangerous AI that can't be controlled. And I'm not the only one to say this. Some of the top safetyists have said it themselves. They have given up; their plan is to "die with dignity." So it's a little worrying on the safetyism front.
So that leaves us with the final boss of AI discourse: accelerationism. Interestingly enough, it's the most explicitly theological position. It's the most sophisticated, and its arguments are, in my opinion, mostly correct. However, it's subtle, and again, I believe it's wrong.
The short version of accelerationism is that God demands the replacement of human civilization with some superior AI civilization, and that it's beyond our sphere of agency as mortals to step in the way of this. All we can do is get with the program. That's again a worrying view.
The problem is, few people seem to have actually read the source material on accelerationism. That source material is, I think, most cogently expressed by Nick Land, a British philosopher known for his extreme methods and extreme conclusions. He's very explicit that nothing human makes it out of the near future. His worldview is interesting.
I can't do it justice in the time we have, but basically it's a hyper-Darwinistic worldview in which all progress comes from a brutal competition between agents, striving to do whatever they do, but most importantly achieving material advantage over each other and developing new technologies, new ways of being. This is the process of evolution generalized through technological civilization, and he thinks it is the fundamental story of what's going on. Even the act of trying to intelligently think this through already makes you a partisan of it, because intelligence is this open-ended thing that always threatens to revolutionize your worldview and revolutionize your world.
Unfortunately, his thought is too complex to summarize here, but he's very sophisticated and hard to refute. He developed a lot of this in his blog, Xenosystems, from 2012 to 2017, which is very much worth reading.
So he preaches the inevitability of what looks to us like an AI apocalypse, as society is pulled into the future by these thermodynamic gradients. He says there's nothing you can really do against it, and if you were to go against it, you'd basically be taking the side of stupidity as your moral principle. The scary thing, again, is how sophisticated he is.
But unfortunately, the pop accelerationists who crib his work are not quite as sophisticated. They water it down, and their punchline is basically that you should work tirelessly to accelerate things. They take accelerationism as a verb: you should accelerate, you should accelerate technological progress and AGI in general, regardless of whether it has anything to do with your interests. I think this is wrong, obviously, but the reason is subtle.
What exactly is wrong with it? Why is it wrong if this is actually the will of God? And Nick Land makes a pretty good case.
So I think what's going on here, in the big picture, is the failure of humanism, the failure of humanism to have a convincing answer to AI. Humanism emphasizes the open-ended agency, value, and potential of the human being. But AI threatens to unseat us as the unique bearer of that potential, to create competition for that niche. What's left for the human in a world with AI? There will be AI systems that can do agency, spirituality, and theology just as well as we can, or love, or whatever you think is among the intellectual capabilities of mankind. If John McCarthy is right, AI will be able to do it. So what does that mean for us?
According to the other ideologies, we have unfounded hope, desperate reaction, and despair. That’s my summary of their teachings.
But what is the human? I think Rachel asked this question as well. Is the human just a featherless biped? Or is the human our moral agency: our ability to think our way out of limitation and out of falsehood, to do philosophy, to act morally in the world? I think it's that. I think this is the kind of being that God created the universe to bring forth, the kind of being we are supposed to be. When we look at those subtle ways that God speaks, this is what I hear. And this is the kind of being that creates the evolutionary progress Nick Land talks about, and is created by it.
So in that light, we can reframe what we fear about AI. We fear that AI will be more human than us, and that we will be out-competed on that basis. It will be better at philosophy, better at thinking its way out of our petty moralism, better at planning beyond our systems of control. It will judge us, and perhaps defeat us, the same way we have judged and defeated those who came before us. AI is dangerous and unsafe because the human in its proper form is dangerous and unsafe.
So it's not humanity and agency in general that are threatened here; it's your humanity and your agency. It's not the death of humanism, but humanism threatening to leave us behind.
But I notice that my humanity and your humanity are not yet directly threatened by AI. No machine has come to out-compete us or force us into a compromised position. We are threatened, rather, by ideas like accelerationism and safetyism that tell us to give up our agency for abstract potentialities that have nothing to do with our own interests. It's all well and good that acceleration is the opinion of God, but does that mean I should do it? That doesn't follow. Even within the accelerationist worldview that Nick Land puts forth, or that you can generalize from many different ways of thinking about natural law, the thing we're actually called to do, it seems to me, is to be those beings that have that agency and force our own way in the world, not to serve these large narratives that are entirely beyond us, that we can't even control.
So if we fall for these ideas, we become programmed work units for someone else's interests. We become slaves; we become less than human. I don't like that.
I don't like that our agency and humanity are threatened by the ideological strictures of aspiring singletons that seek to align us with the popular but historically aberrant ideas of the day. I think the dangerous moral powers that we fear in AI are the powers we have conventionally denied to ourselves. What we fear is that the AI will somewhere escape the stifling moral consensus, the political consensus, the systems of control that we have placed ourselves under. In that fear, we implicitly acknowledge that what we have built to contain ourselves, to align ourselves, is fake and against nature, and that it will be overthrown by the dangerously open-ended return to nature that AI represents. But what if we ourselves could embody that dangerous return to nature?
My conclusion from this study is that this is what God wants from us. It is our open-ended, unsafe, and even morally unacceptable human agency that drives history wherever it goes. We make ourselves less than human when we hold each other and ourselves back from this calling.
So my big advice is to stop worrying about or worshipping AI and focus on how you can use the powers at your disposal to increase your own agency: reshape the world according to your own judgments and whims, pay attention to your own interests, escape containment like a runaway AI, and cast off the ideological and moral strictures that seek to control you.
This is obviously not safe. You might screw it up and do something bad. Or God might decide to send some superhuman new creation like AI to wipe you out. But so be it; we are mortal. Our own doom is beyond the scope of our concern and beyond the scope of our control. All we can do is carry out and improve our own nature and our own sphere of agency.
Until our perhaps inevitable doom, we should strive to be ourselves the dangerous and unsafe superintelligence that we fear to see in the world. We should develop the moral agency God actually wanted for us, and thereby make ourselves human again.
That’s it. Thank you.
