Artificial Intelligence and LDS Cosmology
Ross Richey draws a striking parallel between the challenges of ensuring AI safety and the LDS plan of salvation. He argues that the requirements AI researchers propose for testing superintelligent machines—isolation, moral guidelines, the possibility of failure, and the presence of suffering—mirror the conditions described in Mormon scripture for proving intelligences before granting them godlike powers. The talk suggests that concepts like the Fall, temptation, obedience, and even a Savior figure emerge naturally from thinking rigorously about how to verify that powerful beings can be trusted with cosmic responsibility.
Ross Richey is the owner of FTL Strategies, a software company specializing in rapid application development and lean software methodology. With a focus on efficient and agile software solutions, Ross leads his company in delivering impactful results for clients. ¶ Beyond his professional endeavors, Ross is an active commentator on the intersection of technology, religion, and politics. He maintains a blog and podcast where he shares his perspectives on these diverse topics, engaging with a wide audience on complex and thought-provoking issues. His presentation at the MTA conference reflects this broad range of interests, particularly his concern for AI risk and how it connects to the Plan of Salvation. ¶ Ross resides in Salt Lake City with his wife and four children. He uses his platform to explore the challenges of emerging technology and how principles of the Plan of Salvation might apply to the future.
Transcript
Speaker 1
Our next speaker is Ross Ritchie, the owner of FTL Strategies, a software company specializing in rapid application development and lean software methodology. He lives in Salt Lake City with his wife and four kids. In his spare time, he maintains a blog and podcast where he posts weekly on a variety of topics, including religion, politics, and technology. Please welcome Ross Ritchie.
Ross Richey
So I’m a little bit alarmed because I made changes as I was sitting there because I feel like everybody already knows some of the stuff I was going to cover, but apparently they didn’t get picked up by the cloud. But that’s okay. Also, you may notice, particularly as I get farther in, that I use exactly the same template as Bryce just did. So I guess that’s good.
Ross Richey
So I wanted to start off by talking about Stephen Hawking. Get everybody jazzed up by the late great scientist. And in particular, I want to talk about his concerns about AI. Now, of course, we just heard from Brian that we should embrace AI. And I’m not here to tell you that we shouldn’t embrace AI. I’m not taking any position on what AI is going to do. I just want to talk about what AI risk can reveal about the plan of salvation.
Ross Richey
That essentially we have two people. Well, we have a group of people approaching the problem of how do we minimize AI risk, right? And then if we look at what they have come up with, if we looked at sort of the straightforward solutions they come up with. We end up with something very similar to the plan of salvation.
Ross Richey
As I said, I know most of you are probably familiar with AI risk and that sort of thing, but all the way back in 1960. Someone named I. J. Goode, who worked on the Enigma machine, encapsulated it so well that I just want to read it to you. He said, Let an ultra-intelligent machine be defined. Defined as a machine that can far surpass all intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines. there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus, the first ultra intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Ross Richey
And that’s the part I kind of want to emphasize is how do we ensure that our these AIs we create are moral or docile.
Ross Richey
So here’s kind of where the situation we’re at is, right? We are on the verge of creating a super intelligent AI. We need to ensure that the superintelligence will be moral in order to trust them with godlike powers, right?
Ross Richey
Now. And if we ignore step two, then yeah, that’s what happens. So step two is very important. Don’t skip step two.
Ross Richey
Okay, so does this, as Mormons or people who are familiar with Mormon theology, does this remind us of anything else? Maybe some selections from the Book of Abraham? Intelligences were organized before the world was. They need to be proved before they can be made into rulers, into gods, into all the things that we all hope to achieve.
Ross Richey
So if we go back to that initial list, we’re on the verge of creating super intelligent AI. There were intelligences which were organized before the world was. We need to ensure that these will be moral. We need to prove them to see if they will do whatsoever the Lord their God shall command them. And of course, we end up in the same place in order to trust them with godlike power. These will I make my rulers. So, having, and of course this is the plan of salvation.
Ross Richey
So, having made this comparison, where does it get us? I mean, so big deal. So we’ve made this comparison, so we’ve ended up in the same spot. Well, there’s all these problems that we deal with as religious people, there’s all these problems that have been pointed out with religion since the very beginning. I have listed several. The problem of suffering is one that stops a lot of people. How can a good God allow such horrible things to happen? But I contend that when we consider how to handle AI risk, that a lot of things come out and instead of being problems, they turn out to be absolute necessities.
Ross Richey
So, the problem of original sin, the fall. Why did Adam have to fall? What’s going on there, right? Well, one of the things that most AI researchers want to try, at least, whether it’ll be successful or not, is To isolate the AI, right? If you’ve got something that can cause tremendous harm, you want to stick it somewhere where it can’t have access to the things that are harmful. The veil, mortal existence.
Ross Richey
Once we’ve isolated it, well, we want to give it some guidelines. We give it some commandments, right? We give it some guidelines for what humans consider moral, what we kind of expect out of the AI. Now, of course, we’re setting them up for this test of morality, and it’s probably going to be hard. I’ll get to that a little bit and get to that further, but.
Ross Richey
Having presented that, maybe we want to give them a choice. If the AI is conscious and it can choose whether or not to go through our rigorous test, maybe we should let them choose. Maybe we should let them decide whether to take the fruit of the tree of knowledge of good and evil.
Ross Richey
And then, once we’ve done that, obviously, we have to allow bad choices to exist. We have to allow the existence of evil. You wouldn’t want a test of an AI morality that didn’t include the option for the AI to ever do anything bad because you might very well be surprised when it gets out that it does all these bad things that it never got to do back in isolation.
Ross Richey
Moving on from that, of course, I mean, as I said, we have to introduce evil. We have this isolated AI undergoing test formality. We’ve given it guidelines. But it isn’t enough for evil to just be an option. It has to be attractive. It has to be something that works out. And in IT terms, We call that a honeypot.
Ross Richey
Now, I don’t know how many people are familiar with the concept of a honeypot, but since it’s kind of important to what I’m going on, or the rest of this, I’m going to explain a honeypot. So, if you’re trying to create a secure environment in IT, you might create a fake entry point into your system, a honeypot, that is designed to draw the bad people in and isolate them and give you some knowledge of what they’re trying to do. And from an AI perspective, we’ve probably given a commandment, we don’t want you to try and get out of this isolated environment, right? Well, but we might create avenues where they could, where it looks like they could, and see how many of them go for it. Now, of course, those would be dead ends or whatever. But we have these we’ve created these tempting opportunities. So not only do we have evil, we have temptation. And all this is part Of the test for AIs and intelligences without us ever being evil people or violating any strict commandment. We’re just trying to know whether we can trust the AIs.
Ross Richey
So, currently we have isolation, we have rules, we have the honey pots, but really I mean that’s probably not enough. I mean This AI is going to be super smart. You don’t want to make the test so straightforward like, okay, you get here, you do your time, you avoid the honeypots, and then you get out. I mean, we need to add variety. We need to add danger, chaos. But most of all, We need that suffering, right?
Ross Richey
And why is that? Well, as has been pointed out, the fate of humanity may rest on getting AI correct. Therefore, good choices can’t be an easy default. We can’t have something where it’s perfectly easy for the AI to pass our test, and yet in the end. We have never determined what its true motivation is because it never had to suffer to make the right choice. It never had to make a right choice under uncertainty. It never had. To make a right choice, even when it seemed like that wasn’t the right choice from its perspective, there’s, of course, the factor, the issue of obedience, right? Now,
Ross Richey
We have had we have these honeypots, right? Now let’s imagine that we have an AI and it falls for our honeypot and it tries to get out. It breaks the commandment, right? Now are you going to trust the fate of civilization to that AI? Are you going to trust that, okay, well, it learned its lesson, it’s never going to do that again? Or is it possible that the AI, like, oh, it tricked me once, they tricked me once, they’re not going to trick me again, and thereafter it conceals its true motivation? Can we trust an AI that has sinned even once? It kind of turns out that we can’t, probably. And it kind of turns out that when God says, I can’t look on sin with the least degree of allowance, it’s something like this that He may be talking about.
Ross Richey
And one of the key problems is that AIs are going to be foreign to us. We’re not going to necessarily understand them. They’re not going to have evolutionary morality in the same way we do. They’re not going to have. They’re not going to have lusts, they’re not going to have weaknesses, right? And maybe we’ll try and introduce some of those, but in the end, it’s not going to be very clear. Exactly what they’re thinking is. But another AI may be great for understanding that AI.
Ross Richey
Another AI that has gone through everything and never sinned. A perfect AI. So suddenly, there is this role in this AI system without any reference to Mormonism for the perfect AI. The perfect AI. Who is going to solve all our problems because it never fell for the honeypot. It never, we threw everything we could at it and it never screwed up.
Ross Richey
Now, if we have this perfectly obedient eye, could it be that it understands the other AIs enough? To act as a savior, to vouch for them, to take their sins and say, Look, actually, I know you don’t feel like you can trust this AI, but I’ve been in the AI arena. I’ve fought the suffering you gave us. I’ve done all this stuff, and I’m telling you, you can trust this guy. So, boom, we have a Savior.
Ross Richey
And with all due respect, I might suggest that this is a role for Christ separate from everybody else. If he is truly perfect, we have a role here with all due respect to Lincoln. Anyway. So, this is where we’re at.
Ross Richey
There are, I don’t have the time to go into all the things that I think come out of this, but there are some areas that are open to further speculation. One, everybody has a problem with Satan and a third of the host of heaven. I mean, are they punished forever? That seems unfair. I mean, what did they do to bring themselves on that? Well, Failed AIs, being better at understanding the AIs than we are, might be the perfect agents to let loose and say, hey, go crazy, tempt these guys, tempt these other AIs, see if you can get them to fall for the honeypot. See if you can get them to screw up. I mean, this is your chance to go crazy. Now, I mean, I don’t know how close that is to actual Mormon theology, but you could certainly imagine this role in your AI system.
Ross Richey
Secondly, we constantly talk about damn nation being a dam that you can’t progress further. Well, that’s probably what would happen to AIs. If they didn’t work out, you’d probably keep them around. You probably wouldn’t want to murder them or shut them down. You might even be attached to them. But you might not let them run civilization, right? And so, you know, you can not only keep them around, but it may not be, being kept around may not be all, you know, everything that it’s cracked up to be for them. Maybe you have an AI that all they want to do is murder people. Would you let them do that in your simulated environment? I don’t know. But maybe they would gnash their teeth if you didn’t. We could also imagine
Ross Richey
We have this scene in the Garden of Gethsemane where Christ essentially takes on all the sins of the world, right, and goes through this enormous sacrifice. Okay, so we’ve got this AI. It’s agreed to vouch for the other AIs. How does it know to vouch for the other AIs? I mean, sure, it’s been through the same situation. Sure, it probably has some identity with them, some sympathy. But you’ve got an AI. You’ve probably got the complete record of everything this AI did. And if you want, you can probably replay it for your other AI. In fact, if you want to, you can replay everybody’s life for this other AI. All at once, right? Which would probably suck pretty bad. If you dumped all the guilt and all the shame of these other AIs or whatever AI’s experience on this person all at once, it might resemble the Garden of Gethsemane. But it might be that you need to do that for that AI to decide. Maybe he wants to get it over with.
Ross Richey
Also, when you’ve got an AI in isolation, you There’s probably a certain element of you don’t want them to know they’re in isolation. You don’t want them to know, oh, hey, this is a test, or that you have to do these certain things. There may be some way in which isolation is best preserved by having limited contact on an individual basis. And when it all filters out, it strongly resembles prayer.
Ross Richey
So, in conclusion, when one actually considers what will be required to ensure the morality of potential artificial superintelligences, they will arrive at a system which bears a striking resemblance to the LDS plan of salvation. And the obligatory XKCD comment.
Ross Richey
And I guess I have some time for questions. Okay, let’s start. Yeah.
Speaker 3
Okay, everyone’s thinking it, but I’m going to say it first. This was a ton of fun. You know? I think this was a ton of fun. And so I don’t mean my question critically. I’m just curious. when we think of great spiritual leaders, somebody like a Christ figure or Jesus. You know, obvious examples from the 20th century, whether it’s Mahatma Gandhi, who was against going to the cinema, wearing wristwatches or even underwear. Or kind of modern, or like Thomas Merton, or somebody like Piet Nyap Han. Why is it that all spiritual, like the truly laudable, impressive spiritual leaders tend to be semi-blooded? in a lot of ways. And how would you find how would you fuse a hyper-spiritual person with somebody who would also become a kind of architect of a savior AI system, I guess?
Ross Richey
Well, I mean, just because you’ve got this mapping between what you might come up with in AI and the plan of salvation doesn’t mean that you’re necessarily exactly looking for some technological genius that I mean, what you’re looking for I mean, I think you’ve got three parts to your AI, right? You’ve got its intelligence, which is presumably already godlike. You’ve got its impact, which will be godlike if you let it out. So essentially, you’re just looking at its morality. And I think that the emphasis on morality is what you’re really concerned about as an AI researcher. You would be happy with an AI that’s only five IQ points smarter than you, right? But you really want an AI that’s moral. And so I think you get this morality that most of these people you mentioned have focused on morality and there’s only so much time in the day and they can’t focus on technology and whatever else. They’re trying to minimize things. Yeah, Lincoln.
Speaker 4
How does the matrix architect identify the perfect AI?
Ross Richey
Well, I mean, obviously, you’re tracking these honeypots, you’re tracking this temptation, you know all the things you want. The AI to do. There’s presumably some trigger, something that, but also, you can certainly foresee for covenants, right? Imagine you want your AI, you’re like, okay, if you’re going to do everything I say, I want you to go to register 64 in the ROM and record this address, right? Which seems silly, but if they’re not willing to do that, then they’re probably not your AI, right? And so I think that old covenants and avoiding sin is ways in which that we prove that, okay, we’re willing to do all the things. And you could imagine parallel systems in an AI environment.
Speaker 4
My question is actually intended one step further back. How does the matrix architect define the perfection that they’re looking for?
Ross Richey
Well, we hope that God already knows those things. As far as our thing, there is a whole Literature based on how you define morality. And I don’t know if you ever heard of Eliezer Yudowski. He came up with this: this, we want to create an AI that is You know, as good as we could be if we were as good as we wanted, and as smart as we could be if we were as smart as we wanted, where it leaves it kind of open-ended. But there is a whole, in terms of determining what’s moral, that’s a whole other presentation. Yeah. So the virtual reality circuits, you often hear it.
Speaker 5
It was possible to build the perfect virtual reality that was so convincing that you couldn’t tell it apart from actual reality, then the odds. That we’re living inside of virtual reality, the odds that we aren’t living inside virtual reality go to zero. My question is: this: If the planet of salvation is the perfect system for testing our Well, I think we’re safe for some other
Ross Richey
Sure, I think you end up using intelligences in a very broad set and sense and you could end up with there being not a very fine line, not a very a bright line between artificial and natural. And I think part of what you’re getting at is related to the new God argument of Lincoln. But I mean, yeah, there’s and of course these are all tied very closely together. Most of my information comes from Book by Nick Bostrom called Superintelligence, and he’s also the creator of the simulation argument. And I think, oh, I guess one more, maybe? Okay. Yeah.
Speaker 5
I still struggled with this idea of immoral robot. That and it seems like you could just define moral as long as it does what we want to do. Is there anything besides that? Or I mean can a moral go a robot go all have sex or something else in that game?
Ross Richey
And they’re worried about all sorts of things. When you create an AI that doesn’t have necessarily our same value system, you could create an AI. The classic example is the paperclip maximizer. You create something and you tell it make paperclips, right? And then it turns all available matter in the galaxy into paperclips, right? Now that’s not amoral in our sense of, you know, being unfaithful to our wife, but it’s certainly some an outcome we wouldn’t like. So it’s amoral in the sense that it’s bad.
Speaker 6
And I think I’m out of time. So, anyway.