Keynote - Alt.AI: Why the Best Future of Machine Learning Is Modest

This keynote presents an alternative genealogy of artificial intelligence by examining Soviet contributions to statistical thinking—from Lobachevsky's non-Euclidean geometry to Markov chains, Kolmogorov's probability theory, and Yushchenko's pioneering work on pointers. The speaker argues that AI is fundamentally "people using statistical tools" and calls for a modest, humane approach that acknowledges the plural, uncertain futures these techniques have always modeled—drawing parallels between LDS values of community and kinship and the collaborative networks that powered computing's development.

Benjamin Peters

Benjamin Peters (born 1980) is an American media scholar, author, and professor known for his work on the history of communication technologies, information theory, and the social dimensions of digital networks. He serves as the Hazel Rogers Professor of Communication at the University of Tulsa and has held affiliations with several prominent research institutions. Peters is best known for his book How Not to Network a Nation: The Uneasy History of the Soviet Internet (2016), which explores the failed attempts to build a nationwide computer network in the Soviet Union and examines how social and political systems shape technological development. The work received widespread acclaim for its interdisciplinary approach, bridging media studies, history, and science and technology studies. He has also edited Digital Keywords: A Vocabulary of Information Society and Culture (2016), contributing to critical discourse around the language and concepts underpinning the digital age. Peters’s scholarship carries resonance for those interested in the intersection of technology, human potential, and collective aspiration. By investigating how societies envision and fail to realize transformative technological projects, his work illuminates the deeply human—and often ideological—dimensions of networked communication. His research reminds us that the tools we build to connect and elevate humanity are always embedded in moral, political, and even spiritual frameworks. For communities exploring themes of theosis and the cooperative pursuit of transcendence through technology, Peters’s insights into the promises and pitfalls of networked societies offer valuable perspective on how human aspiration and systemic constraints interact in the ongoing project of building a better world.

Transcript

And, uh, can I just begin by saying what an extraordinary act to have to follow. Never mind the semantics of the least likely four-word combination that I can imagine: Mormon, transhumanist, barbershop, quartet. It has now been superseded by, first of all, a Mormon transhumanist barbershop sextet, and now, even more, the experience of it. What an extraordinary acoustic reverie. I'm deeply grateful for the talent that has brought us all together and delighted to be with you today.

So I hope today to offer simply a kind of passing, speculative, as ecumenical as it is real, exercise in the imagination of what AI could be and has been. Namely, there are, as we've noted, enormous great expectations surrounding AI today. Predictions of the compute behind GPT-4 or Claude over the next ten years are extraordinary. We have, of course, tech moguls and billionaires prophesying utopia on the one side, and plenty of safetyisms on the other decrying apocalypse. I will simply point out, with Lee Vinsel, that both of these participate in the same criti-hype cycle that I think we need to move away from.

Here's, for example, a logarithmic prediction of the cost of running the next-generation GPT-4-class model. Deleuze once noted that technological history was serrated. Perhaps this is what he meant: namely, that the cutting edge of innovation, the blue dotted line here, continues to bend ever upward toward larger, fewer corporations. I mean, if you have not billions but parts of trillions of dollars of capital available to you, you may play this game into the thirties. And yet at the same time it also bends down, in the blue teeth, toward local applications where AI becomes affordable or available. And both of these tensions exist simultaneously because they're part of the same discourse, part of how we buy and sell AI. In any case, the expectations are indeed great.

My hope today is simply to remind us that Dickens, in some ways, already wrote this book, and to nudge us, as he does in Great Expectations, to turn away from a world of social advancement, wealth, and class, and toward rather modest increases in affection, loyalty, and conscience. What would it look like to outline what resources are available to us in the LDS tradition for thinking toward a more self-checking, modest, and humane set of human relations to undergird AI, automation, and statistics?

So this is how I'm understanding statistics, or excuse me, artificial intelligence. Artificial intelligence is people: people using statistical tools, then wrapped up in an often crude (and I want to be clear, I'm not talking transhumanist, I'm talking crude post-humanist) philosophy, which often looks like images like this, and then deep-fat fried in techno-capitalism. That's where we are today.

Right, and what I'm offering today is an alt-AI: an exploration, an alternative genealogy, of what artificial intelligence has been and could be if we were to learn from other variables. Namely, I think that using some of the Soviet materials I'm going to turn to in a second, we can control the last two variables here and think about how people have been using statistical tools elsewhere. The Soviet century, as this book outlines, offers a statist tradition, and statist and statistics are etymologically kin: before we had the word (Statistik in German, statistics in English), we had political arithmetic, which I think is interesting, often tied to large state explorations. We have had ways of thinking about statistical governance over the long last century. What can we learn from that, besides simply controlling for the techno-capitalism variable?

And let's just take a moment to talk about that again. Today I would read AI as standing for, among other things, additional investment, just as often as it stands for artificial intelligence. Even from its coining by John McCarthy in 1956, the term was effectively gunning for grantsmanship and funding, to separate its author from the previous generation, the saturated market of cybernetics researchers. In some ways I'm not dismissing the whole topic; I'm talking about the word. The word AI has much more to do with PowerPoint than it does with Python. I think if we can take that, look at it differently, and open up new possibilities, we'll have some new things to say.

So what if the future could look different? And what if the past already has? Here are some provocations that I hope will help us see things anew.

Oh, no, before I do that, I want to just briefly note: what would it look like, before we look at the Soviet space, to look through the lens of LDS values? What would an LDS kind of AI approach look like? I think it would acknowledge that we are emerging from American nineteenth-century restorationist origins to become a modest but interestingly transnational, Americanizing force for modernity. And if you can take that, think about a minority modernity that moves a people from persecution to uneven privilege, then I think we have the terms by which we can begin to think through our own value systems and what work the Church is doing in the world.

Simultaneously, I think that an LDS approach to AI might be consonant with the very recent March 13th, 2024 guidelines that the Church released. It might recognize a kind of balance of possibilities, as Elder Gong calls it, "neither giddy nor alarmist," quote unquote, in our approach to this tradition. It would take seriously collectivist communities. It would think about beehive-like record keeping and data processing, and diversely embodied creators working in creative material environments, including especially, as John mentioned, our stewardship over this earth.

And finally, I also think that an LDS approach to AI, whatever that might look like, would have a materially grounded understanding of what intelligence looks like. Nancy, in many ways, has opened the hood for us this morning already: not necessarily artificial intelligence as a term, but whatever intelligence means. That's a question mark that I think we are distinctly well positioned to try to think through. So, if those are not a terrible first draft of ideas about what LDS AI values might look like, I'd like to offer some working propositions meant to provoke and mix up our thinking a little bit, again drawing from this project I'm working on.

So here are eight propositions. First, perhaps the best way to understand AI is to look away from it. Namely, as Eric Hobsbawm points out, we didn't know that the French Revolution was the French Revolution (there were a lot of revolutions happening) until after the American Revolution, after the Spring of Nations, after the Russian Revolution. And perhaps so it is with AI today. Let's find some perspective by looking elsewhere.

Second, what if we were to take a different tack on what latter-day might mean? Namely, we might be wrong to project apocalypse and singularities continuously into the future, when in fact another reading of latter-day might invite us to take comfort from indigenous and postcolonial peoples and insights, who remind us that the apocalypse has been happening locally and unevenly for centuries. Some seventy years of Soviet thinking around statistical governance, a core commitment that remains continuous today, ended the world as they knew it. It was a local apocalypse, one that ended in Chernobyl plumes and a collapsed economy. And so I wonder if the Soviets experienced a kind of proverbial AI apocalypse. What can we learn in these latter days, the days that come after? Maybe, as John Ogden has just pointed out, the apocalypse, once normalized, can still work out. I'm trying to neutralize apocalyptic language here.

Third, I offer the idea that smartness is a trap, by which I will simply observe that if you look across the classical liberal tradition of great books, there are thousands of pages devoted to terms like wisdom, knowledge, education, analysis, and judgment, and fleetingly few devoted to intelligence as a term, except in Machiavelli and Hobbes, who use the term almost exclusively to talk about state intelligence or spycraft, which I think is interesting. In other words, what I'm saying is that smart and intelligent, as terms, arrive as relatively new kids on the language block. Smart, etymologically, is related to the German for pain, Schmerz. Ow, that smarts: a cuttingness, a sharpness, a dose of embodied pain. And that's okay. I'm simply trying to recognize how smartness anchors us in bodies.

Fourth, just a bit of context, I think, as a gloss of my first book: the twentieth-century showdowns between market capitalists on the one hand and statist bureaucracies on the other. That duel is for losers, and everyone lost the Cold War. And the multipolar world with heartbreaking fratricidal wars that we are experiencing now, including Russia's expanded invasion of Ukraine, makes this point even more poignant.

In some sense, then, I wonder playfully if we might play around with a revision of how the Soviet Union did not end. Are there ways of seeing big tech as a kind of new Soviet today? And let me be clear here: there are far more differences than there are similarities, but at least we can pause to consider, as a cautionary reflection, that when we're talking about super-powered oligarchs committed to long-term planning and population-wide surveillance, we should be a little bit unclear whether we're talking about the late Soviets or about multinational tech executives (not tech as it's experienced here). And I hope that we can begin to make that political-economic distinction.

All right, and now it's these last three that I want to develop in a little more detail today. My hope throughout is not to adopt any tired tropes of AI skepticism or cynicism, or their opposite, which again I think is much the same thing. My hope is to uncover and brush off previously overlooked material from the prickly bramble of history that might help us refresh our modest and sustainable approaches to AI going forward. Okay, so these three are going to draw from this ongoing book project, which I'm happy to say more about. Here's the working table of contents. And to make those last three points, I'm going to briefly run through four case studies.

First, from left to right: Lobachevsky and non-Euclidean geometry, and what it teaches us about complex spaces since the nineteenth century, and how the nineteenth century continues to work today. Second, Markov and the Markov chain, behind which I see a kind of theotechnic, or a rereading of chance and our relationship to God. Third, Andrei Kolmogorov's formalization of modern probability theory, which is definitely behind a whole lot of the Bayesian probability networks and neural networks we're looking at. Hopefully it will also teach us why AI hallucinates, or rather, I prefer the term concocts, and also why the future, like the past, has long appeared queerly plural. And I'll talk in a second about why Kolmogorov offers a kind of analogue to the Alan Turing story. And finally, fourth, closest to me: Yekaterina Yushchenko's subtle, often ignored technique for transforming programming into a complex space, and in particular the role that women play in powering computing, then and now. And each, I think, has a moment of LDS resonance with the values I mentioned earlier. Okay, overarching arc: all of these serve a tradition of thinking about AI as people using statistical tools in complex spaces. So let's buckle our seat belts. We're going to move quick.

First, the geometer Nikolai Lobachevsky helped formalize complex spaces in the nineteenth century. And it might just be that this entire case study is an excuse for me to put these two portraits next to one another. Ready? Okay. So on the left, or is it the right? In December of 1829, Nikolai Lobachevsky rejected the fifth postulate of Euclid to restore a more flexible, original form of geometry, while being accused of harboring priznaki bezbozhiya, or the signs of godlessness, in the Muslim-majority region of Kazan, Russia. On the right, or is that the left? A few months later, in April 1830, Joseph Smith (consider the analogy here) rejected the Nicene Creeds of the early centuries to restore the Church of Christ, a more flexible yet original form of Christianity, while also being accused of heresy on the bustling edge of the American empire that was, before westward expansion, upstate New York.

And I'll save for later the kind of playful analogy or suggestion that nineteenth-century Mormon history follows a hyperbolic coordinate system in its polygamous relationships, in which more than one line can pass through a point and not intersect another line on a plane, and that twentieth-century Mormonism in particular bends back, far right, into the elliptical coordinates of the nuclear family and American capital, a kind of concentration and core centralization: namely, inverting Lobachevsky, that less than one line, which is to say no line, can pass through a point and not intersect another line on a plane.

Instead, I'm simply going to emphasize that Lobachevsky's non-Euclidean geometry gave the nineteenth century formal tools for thinking about spaces and environments complexly. And it sped subsequent innovations in lots of practical applications, like geography: it turns out the Earth is elliptical, and so is every other heavenly body, and elliptical geometry is really useful for that. As well as in really abstract concepts, like the Riemann spaces and Hilbert spaces in mathematics that we'll come back to in a moment when we talk about Kolmogorov.

So, what's my point so far? Besides the lovely, you know, creed-and-postulate rejections of the early nineteenth century: that the nineteenth century taught modernity how to think about complex spaces, and AI inhabits a complex space. Okay.

Second, Markov chains: a theotechnic, or a tool for rethinking our relationship with God. So here on the left is Andrei Andreyevich Markov Sr. (by the way, his son, with exactly the same name but Jr., is a major Soviet cybernetician and mathematician). It was 1905, and he was living through the brewing social unrest in St. Petersburg, effectively an anti-Tsarist, anti-Orthodox mathematician. And everything about this unrest and historical environment helped organize him against his colleague there on the bottom right, Pavel Nekrasov, a pro-Tsarist, pro-Orthodox colleague in Moscow. I mean, it's almost a meme rap battle between these two, almost too stereotypical. They fell into a fascinating, hot theological debate

that ended up, I argue, rewiring the world. Namely, Nekrasov, on the bottom, argued that since the law of large numbers holds for populations generally, this shows the existence of a beneficent God, in particular a God who wills us to enjoy free will. He's basically arguing that emergent patterns prove that chaos is not supreme or general, but that each of us, in expressing our free will, participates in probabilistic patterns that express a higher order of God's will.

Markov says: nonsense. And in 1905 he develops a simple technique, the Markov chain, to show how both randomness and order can coexist in local, small, but complex spaces. A Markov chain, which we might think of today as a memoryless sequence of events, describes what happens next based on things now, and only now: memoryless. We'll come back to that in a second. So let's start at the top left. Simple events, or points in space, can have causal but still random relationships with other events. Imagine, for a Markov chain, that you're a drunkard in a 1905 St. Petersburg public square trying to determine your next step. A Markov chain would describe the current state of affairs and the likelihood of the next step. Do you go from A to B? Do you stay at A? The next step of the system depends only on the current state. Independent events can take random walks.

Not only is the Markov chain a theological technique and a response to a theological debate; we see its heirs scaled up today, in Markov chains, fields, and models applied in countless forms, across the top there: Brownian motion. Markov chains have been fundamental to information theory since the post-war period, directly inscribed into much machine learning, neural networks, and even PageRank, the heart of the Google search algorithm. Basically, every time you search for something on Google, imagine that you're taking a Markov-chain random walk across the public square of internet links: you're stumbling like a drunkard, not knowing the past, knowing only the present moment, onto a ranked order of sites according to the strength of the other sites linking to them. In other words, if you've ever felt out of place without your Google search tool next to you, know that technologists have called Markov chains memoryless for a reason. Perhaps we're inhabiting the memorylessness of a Markov chain here.
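Since the talk leans on the drunkard's-walk picture, here is a minimal sketch of a memoryless random walk. The three-corner "public square" and its transition probabilities are invented for illustration, not drawn from the talk:

```python
import random

# A toy Markov chain: the drunkard's walk across a public square.
# States are corners of the square; each row gives the probability of
# the NEXT step given only the CURRENT state (memorylessness).
TRANSITIONS = {
    "A": [("A", 0.5), ("B", 0.5)],
    "B": [("A", 0.25), ("B", 0.25), ("C", 0.5)],
    "C": [("B", 0.5), ("C", 0.5)],
}

def next_step(state, rng):
    """Choose the next state using only the current state."""
    states, weights = zip(*TRANSITIONS[state])
    return rng.choices(states, weights=weights)[0]

def random_walk(start, steps, seed=0):
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(next_step(path[-1], rng))
    return path

print(random_walk("A", 5))
```

Note that `next_step` consults only `path[-1]`: the walk's entire past is invisible to it, which is exactly the memorylessness the talk describes.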

Lots of other cool things to talk about there, like Brownian motion, which both Wiener and Kolmogorov will later turn to, relying on time series and Markov chains to do more. But let's keep moving.

Just a curious aside: I want to note that in 1913, Markov presented arguably the first paper in the digital humanities to the Academy of Sciences in St. Petersburg, a hand-counted analysis of over 20,000 characters derived from the first part of Evgeny Onegin, Pushkin's masterwork. Imagine something like Romeo and Juliet. It was meant to demonstrate Pushkin's prose genius. And while I don't think he was wrong to pursue that goal, I think the story is even more interesting, in that Evgeny Onegin performs Markov's point in another way, as the self-possessed and powerful heroine Tatiana embarks on random walks through forest pathways, even reading the marginalia in the library of Onegin, her would-be lover down at the bottom, only to discover in these chance encounters, in this stumbling upon the chaos of the current moment, the tragedy to come: namely, that Onegin is a cipher and a misanthrope. Onegin is a deliberate play on negation in the Latin. Digital literacy, I think, has ever since been learning to read the traces of humanity between the lines of code.

So what's the point here? That AI rests on a technique born out of rereading theology. The Markov chain is a theotechnic.

Next, Kolmogorov's probability. So, natural language processing, Bayesian probability networks, and other modern-day techniques behind machine learning rest on an almost unacknowledged revolution in modern probability. By unacknowledged I simply mean that most of us don't know about it. Kolmogorov is in the third photo, on the left, not in glasses. When you ask mathematicians what Kolmogorov did with probability, they will effectively say he solved it in 1933, when he applied probability theory to measure theory in his famous book, the Grundbegriffe der Wahrscheinlichkeitsrechnung.

In it, basically, he lifts probability from a low-hanging, messy empirical science full of sample-size and sampling issues, counting beans and coin flips. He takes that, generalizes it, and gives it a mathematical engine by showing how the probability axioms map one-to-one onto the axioms of measure theory. And measure theory is a preexisting form, basically, of infinite-dimensional geometry, which again we inherit from Lobachevsky through David Hilbert. So imagine a conceptual space that can have any number of dimensions.

So what Kolmogorov shows is that in a potentially infinite measure space, one can understand, just as Markov did, an event as a point, and the likelihood of one event causing another as a link between that point and another point. I'll show you some examples in a second. And with this, you can create trivially many dimensions, as complex as you wish. In short, as the mathematicians point out, Kolmogorov's probability theory just worked. And on it could be built all the subsequent tools for machine learning, Bayesian probability networks, and the rest of it today. In fact, it wasn't until some fifty years later that somebody at Stanford proposed a revision to his initial theory. So, if you've understood basically the what-matters and the so-what, now I want to offer a curious story about how history and society can help us think differently.

It is even a tumultuous and revolutionary story, maybe not unlike Alan Turing's invention of the Turing test, which he did while under British police anti-gay pressure. Here Kolmogorov, then a young, elite, and, according to Masha Gessen, gay mathematician, formalized our techniques for seeing the future at the very moment when his own future was deeply, deeply compromised. So Kolmogorov wrote and published the book in 1932 and 1933, just as the Stalinist purges, rising from 1930 to 1936 and peaking in 1937, were reaching their terrible height. And Kolmogorov, as he was writing this vision of how to model the future, was being forced to publicly testify against his former mentor, Nikolai Luzin (seen in the stamp, second from the left at the top), in a Stalinist show trial, something like the far-left photo. And Kolmogorov was simultaneously being blackmailed by the secret police for his alleged gay relationship with his closest lifelong colleague, Pavel Alexandrov, on the right in glasses.

So, what I'm offering is a story about how Kolmogorov formalized how it is possible to model infinite potential futures at the very moment when he could not possibly know his own. His theory performs his life. A knock at the door at night would routinely but unpredictably disappear neighbors and comrades into the gulag. He was guilty of nothing except loving his best friend, and yet his future with the state might end at any moment. How else to respond to radical revolutionary uncertainty except to formalize a model of infinite futures? And without this crucial pivot point, again in 1933, probabilistic networks are not thinkable today. Instead, we have much machine learning that assumes modern probability spaces and Kolmogorov measure spaces.

Let me try to give an example here briefly. On the left is a conceptual space, a measure space, in which all possible outcomes of two dice rolls are imagined. These rectangles represent different dice events. We can then predict the likelihood, or the link strengths, of different events and their relationships, the different colored subspaces within the space on the left, and map them out into a geometric space of trivially large dimension, here on the right. If this seems imaginable, you've moved from the left to the right. Now do this while the secret police are knocking at your door, disappearing your colleagues for lesser infractions than what you think they have over you.
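As a rough sketch of the move the talk describes, here is the two-dice measure space in miniature, with events as subsets of the sample space and probability as a measure satisfying Kolmogorov's axioms. The particular events chosen are mine, for illustration:

```python
from fractions import Fraction
from itertools import product

# Kolmogorov's move in miniature: probability as a measure on a space.
# Sample space: all 36 ordered outcomes of two dice rolls.
OMEGA = set(product(range(1, 7), repeat=2))

def P(event):
    """Uniform probability measure: |event| / |OMEGA|."""
    return Fraction(len(event & OMEGA), len(OMEGA))

# Events are just subsets (the rectangles in the talk's diagram).
doubles  = {(a, b) for a, b in OMEGA if a == b}
sum_is_7 = {(a, b) for a, b in OMEGA if a + b == 7}

# Kolmogorov's axioms, checked on this little measure:
assert P(OMEGA) == 1                                      # normalization
assert P(set()) == 0                                      # empty event
assert doubles.isdisjoint(sum_is_7)
assert P(doubles | sum_is_7) == P(doubles) + P(sum_is_7)  # additivity

print(P(doubles), P(sum_is_7))  # → 1/6 1/6
```

The point of the sketch is that nothing empirical happens here: events, likelihoods, and their combinations are all just geometry on a set, which is exactly what let probability generalize to spaces of arbitrary dimension.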

I think there are two reasons why this matters. First, again, modern probability theory, on the right, offers us an imagination of potentially infinitely many futures, whereas the earlier stuff, frequentist statistics, measures one real world, or at least an approximate convergence to one real world over time. Modern probability theory does not assume an ontological unity. Rather, the present fractures into any number of possible futures. Second, this fracturing, I think, helps us understand why large language models today hallucinate.

Or perhaps better, concoct, which I assume is a better word because it implies no will. Consider a complex probabilistic space composed of different vectors of knowledge. Imagine, going back to the figure, that you're inhabiting a space like the one on the right: a bunch of different vector fields with different knowledge statements in them. Two of those existing vectors are already true. One says: France gave a statue to a new country. The other says: Lithuania is a newly independent country. And now you're asked to imagine the most likely connection between them. Is it not unreasonable to ask a measure space of infinite dimension whether perhaps France gifted Lithuania the Vilnius TV tower in 1980? News flash: it didn't. This is a hallucination. But whereas we like to imagine, and I'm not sure this is true, that humans' perception of reality eventually converges to one, I think it's important to at least acknowledge that maybe we're a little more like LLMs, in that their imagination, like Kolmogorov's, trivially diverges to many. So a conception of infinite futures matters elsewhere, too.

Let me just briefly sidebar a couple of other ways of thinking about populations. Consider how, under the guidance of Theodosius Dobzhansky, a Soviet Ukrainian working in Columbia's fruit fly labs in the late 1930s, after Kolmogorov, we get the modern synthesis, a combination of genetics and evolutionary theory that took shape around statistical population landscapes. And these statistical population landscapes, or fitness landscapes, are probability spaces of a kind for measuring population change over time. Thus we have the twentieth century welcoming models for complex spaces of changing environments, evolutionary forces, and genes acting within populations, and doing so dynamically. Here, the population is the gray dots, and the changing environment is the rainbow-blue plane. The model can describe and predict these possible futures statistically, without determining them: not one future but many, with the multiple converging futures themselves modeled as an environment. Again, multiple futures modeled in a complex space. Here's one more.

In and after World War II, there is a technique called the CEP, the circular error probable, krugovoe veroyatnoe otklonenie, that would join a coming generation of the scientific-technological revolution in military technology across the Cold War, driving the evolution from bombers to intercontinental ballistic missiles to rockets to thoughts of nuclear-tipped satellites and eventually smart guided missiles. The CEP, the circular error probable, which would in a sense begin it all, was a simple measure of a weapon's precision. So look at the figure on the right and imagine that you're seeing a green target zone, a black dot that's our target, and a spread of red strikes. Does that seem reasonable so far? Except that at the center of it there's a Soviet technique; there's modern probability at its center, which means that the black dot is not the target's location: it is the fixed perspective of the gunner into the future. And the red dots represent not the strikes but the plural, in fact trivially many, possible locations of the target. The target is moving in time. One target, many futures. Like Markov's drunkard taking his next step in a public square, like Kolmogorov's future amid the purges in 1933, like a demographer surveying a population amid climate change, it's hard to know how to survive when the next step is plural, when the space is complex, and when the stakes are so high.
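As a sketch of the CEP idea: one simple estimator of the circular error probable is the median radial miss distance, the radius of the circle, centered on the aim point, expected to contain half the strikes. The impact coordinates below are invented for illustration:

```python
import math

# CEP in miniature: median radial miss distance from the aim point.
# Impact points (x, y offsets from the aim point) are invented.
IMPACTS = [(0.2, 0.1), (-0.5, 0.4), (1.1, -0.3), (0.0, -0.9),
           (2.0, 1.5), (-0.1, 0.2), (0.7, 0.7), (-1.2, -0.4)]

def cep(impacts, aim=(0.0, 0.0)):
    """Median radial miss distance: half the strikes land inside this circle."""
    radii = sorted(math.dist(p, aim) for p in impacts)
    n = len(radii)
    mid = n // 2
    return radii[mid] if n % 2 else (radii[mid - 1] + radii[mid]) / 2

print(round(cep(IMPACTS), 3))  # → 0.945
```

The talk's inversion is easy to state in these terms: read the aim point not as the target but as the gunner's fixed perspective, and the scattered points not as strikes but as the target's many possible future positions.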

So I'm not claiming, of course, that Kolmogorov was the first to formalize the idea of many possible futures. But, as we can develop elsewhere in the book, there is a lot we can learn from the uncanniness of this particular corner of the early twentieth century. And Unheimlichkeit, uncanniness, unhomeliness, is a term that Freud popularizes in 1919. What's the point I'm trying to make with all this dark, gloomy material? It's that we have long inhabited complex spaces, and that we now have the tools to formally describe that fact, à la Lobachevsky, Markov, and Kolmogorov, so that our orientation to the future can appear as it has long been: queerly plural and subject to models and complex spaces.

Finally, a fourth point. And I think this is acknowledged to a rare but increasing degree: women power programming, and they long have. Here's a largely unknown story of how Yekaterina Yushchenko, an algebraist and probability theorist, helped turn computer programming into the complex space that it is today, or at least almost, through the development of what she called nepriama adresatsiia, or indirect addressing, in a 1955 book on address programming. This is the same technique that Harold Lawson would call pointers almost a decade later, in 1964. Donald Knuth famously considers pointers one of the most, quote, valuable treasures in computer programming. So let me tell you a little bit about her fascinating backstory.

Yushchenko survived World War II in Central Asia, having fled there with her family in the late thirties; like Kolmogorov, they too were persecuted for their resistance to the state. They fled to Samarkand, where she studied, and she came back to Kiev, Ukraine, after the war, where in 1952 she was hired to lead the programming team of one of the first programmable digital computers in Europe, the MESM. The MESM, whose name means the small electronic calculating machine, was located here in the bottom right, in a two-story dormitory next to an abandoned monastery and cathedral, with electricity but without running water, in the forested area of Feofania in southern Kiev. The MESM, or small electronic calculating machine, was not small. It filled both floors. It managed a stunning 50 computations per second via 6,000 vacuum tubes. And this hardware limitation is actually my whole point.

I’m not interested in scaling that up; I’m not going to make a “sufficient change in quantity changes quality” historical-materialism argument. It’s the hardware here that really matters. Because, like all von Neumann data-bus processors, the computer she had to work with would process instructions one after another, just as they do today. And she wondered: how could such a tool be used to calculate mathematical problems of enormous complexity? Air turbulence, low-orbit variations, fluid dynamics, the other obviously military programs that she and her team, and Kolmogorov and the rest of them, were all tasked with. How in the world can you translate programming’s complex input into linear outputs that the hardware can follow?

Well, indirect addressing was her answer, or what we now call pointers. It’s really simple: a pointer points to a bit of stored memory elsewhere. Like a page number in a book index, it tells the program where to look for something stored elsewhere. And this simple semiotic index makes it possible not to store all memory in the lowest-level language; it allows you to create, and then move between, lower- and higher-level languages, all the way up to the block-oriented programming that our kids are using. Software gains a kind of architecture, an architecture of arbitrary complexity, thanks to the simplest of techniques, like a pointer.

So I think another point to really stress here is that an emergent literature, here in the bottom right, has finally begun to establish just how vital women are and have been for the history of computing, especially for programming. Let me make a couple of points. On the left are the programmers of the ENIAC, the first digital computer in the world, under von Neumann. On the top right, we see programmers at Bletchley Park who helped Alan Turing’s team break the Nazi Enigma code, and who then, by the way, watched Britain lose its competitive edge in computing. As Mar Hicks shows in Programmed Inequality, the book in the middle, it was through basic institutional and organizational questions, like patriarchal hiring and management structures, that Britain lost its computing edge. To this I hope we can add, in the future, the stories of Yushchenko’s team, seen here in the winter of 1953 as they taught the MESM to speak complexly. I’d be happy to say more about Nadesh Domichenko or Marina Morcharavets or the others she worked alongside.

So, what in the world are my points, and how in the world do we tie all this together? I’m not simply saying that Slavic-speaking stories matter to the story of AI, although, with a nod to our keynote, Slavic women in particular do. We might also look here to the inventors of PayPal, Grammarly, WhatsApp, Ethereum, Solana, Chainlink, and Telegram, for what my colleague Maria Tobolzovich and I call the tech avant-garde of post-Soviet IT talent: groups of hundreds of thousands of people trained to technocratic excellence in free, excellent public educations, and then, after the collapse of the Soviet Union, often set loose as a globally mobile IT labor class, bringing ferocious talent, often relatively cheaply, to Boston, Silicon Valley, Tel Aviv, Shanghai, Berlin, and elsewhere today.

So in this case, I think it’s a prime reminder that while AI must have hardware as a necessary condition, it has never been only about the hardware. In fact, some of these IT talents can program, and program elegantly, precisely because they did not see a computer before they arrived in the mid-90s. They learned to program economically, without computers. The story of Soviet AI, in other words, has faced multiple apocalypses, or ends of the world, and yet it lives on today. And here I feel much in line with what John was inviting us to think about. So let’s review our main points again and see if we might see them differently in light of some of the materials I have up in the air.

First, AI has no necessary commitment to human-shaped forms, and no necessary investment in branding exercises. Instead, we can see across Soviet AI just one of the many transnational traditions of people using statistical tools to model complex spaces, not for one future but for many futures. That’s the continuity I see.

I think we can also see, number two, that while we’re prone in our discourse to talk in flexes and exponents and power laws, we often overlook the historical and social ruptures, the ends of the world, that are all about us. We don’t need to look to the future to see that some have already survived a kind of AI apocalypse, a moment where predictions fail, or, in the famous title of Alexei Yurchak’s book about the Soviet Union, where everything was forever until it was no more. And at the same time, here and again, others are living a utopia all about us. The coexistence of multiple real worlds, I think, helps humble me and limit my predictions of the effects and futures of AI, even as I hope to make them better with you all, sharing the common project. The present is also many.

What else may be the healthiest inheritance of an LDS tradition than that it works in pluralities? We have plural Zions, plural scriptures, still today plural sealings, even plural gods. Is it that much more to ask that we not generalize too far into our beautifully, vexingly pluralistic world and future? I’ll skip over points three to five now simply to emphasize what I hope we also see, in point six: that computational spaces have long been complex.

Space has been curved since well before Lobachevsky. Sorry, flat-earthers. And it bends, with Einstein, into the space-time distortions of the gravity of material bodies in motion, or in the scrum of lived relations one with another, if you want to go theological. Our models of the future, since Kolmogorov’s teams, are also models of our own historical present. And nothing dates quite as quickly as a prediction of the future. And software, since Yushchenko’s team, has translated between complexity and linear simplicity. Again: formal, social, and historical complexity all work together to help enliven what not only could be but has been the history and the future of AI.

Number seven: many of the techniques we’ve talked about, the things that still drive AI today, have theological inflections and sometimes origins. Lobachevsky’s anti-creedal theories; Markov’s chains, which welcome plural gods of prediction and chance; Kolmogorov’s personal prayer that he might survive the heresies of his mentor Luzin, whom he is being forced to betray publicly. What were Luzin’s sins, by the way? Among others, incanting the Jesus Prayer, in which, like his transfinite set theory, the name of God has a name but no measure. And he did this in the basement of an Orthodox cathedral in secular Moscow. And by the way, the future Kolmogorov inhabits in this moment is so unpredictable that even as dozens of people disappear around him, he survives the apocalypse, and so does his mentor Luzin. No one knows why; we have no known explanation for why Luzin survived. The past as well as the present is uncertain, and our tools for modeling the future reflect this baked-in contingency.

I think throughout number eight we’ve also seen another resonant LDS point: that our embodied theology achieves transcendent kinship through subtle communities of pluralistic mammals living in thick relationships one with another. Everything we’ve talked about: Lobachevsky, Markov, Kolmogorov, Yushchenko. Those are not just four names (and if it comes up in Q&A, L1, M2, K3, and Y4 are simpler ways to remember them). In fact, each of these rests on communities, on embodied teams and relationships, that I have mostly elided for the sake of time today. Yushchenko’s teams of women and men programmers, many of whom had survived World War II only by hiding, found work in part because so many of the men did not return from the Eastern Front. It is estimated that if you were a male born in 1924, of your five male peers in kindergarten, you alone would be alive at the end of the war. It’s just staggering loss. And it’s in this space that surprising things happen, like a reclamation of women’s labor and their intelligence and the ways they have created programming today. Right? The surprising contingencies of history cut in all sorts of directions here.

Kolmogorov, too, in the scrum of life, is living in conversations with colleagues near and far. He’s being accused, villainized, for having published in German, the language of science at the time, in open networks, living networks, with his colleagues. The Markov chain takes up this whole ethos of the social revolution in which he’s living. Simply put, machine learning has long hosted its own type of low-grade communitarian kinship and sentience.

So here are my great expectations and conclusions for AI. I envision a future of artificial intelligence that’s ringed with women and men, kinship and care, queer relationships and uneasy plural futures. I envision a future very much part of the past and present, but a bit more conscientious, more loyal to those living debts, and more caring to those who use statistical tools for AI. You know that old phrase, AI won’t replace your job, but someone using AI will? I think it’s only halfway there; or rather, it misses the starting point. AI already is people using statistical tools. And that’s good, if modest, news. It’s people all the way down. AI, in the title of Norbert Wiener’s book, is The Human Use of Human Beings. Finally, one last moment.

I recognize that some of what I’m saying here may have a kind of weird-uncle vibe, if you know what I mean. Like you gather together with your family over Thanksgiving, and there’s that one guy who insists on saying the wrong things out loud. And you’re all like, ugh. I get it. That’s how I feel about a lot of the Soviet materials too. And perhaps this is also a common lesson of the social media era in which we live, as well as a gentle inversion of the anthropic principle. Which is: I think we could recognize that each of us has a bit of that weird aunt or uncle within us. And that by modeling how the history, the present, and the future of AI have been more plural, more uncertain, and more modest than we might otherwise have predicted, we can see in the weird aunt and uncle at the other side of the table of AI an uncanny reflection of ourselves, and thus respond with more generosity, hospitality, and community, precisely as I see embodied here in the larger human family project: and indeed break out, set new place settings, and welcome all to the ongoing feast of people using statistical tools. A worldwide family in these AI latter days.

Thank you.