Collective Intelligence vs. Artificial Intelligence

Evan Hadfield argues that sufficiently advanced AI poses existential risk to human flourishing—and that such AI has existed since 1844 in the form of corporations. Drawing parallels between corporate structures and AI safety concerns, he contends that corporations exhibit autonomous decision-making, goal-oriented behavior toward shareholder profit, and misalignment with human values. Hadfield points to climate change and biodiversity loss as evidence that we are already experiencing the "paperclip problem," where corporate optimization for profit overrides human welfare. He concludes by advocating for collective intelligence and democratic movements as the solution, citing humanity's long history of wrestling with artificial institutional superstructures.

Evan Hadfield

Evan Hadfield is a speaker and thinker exploring the intersection of artificial intelligence, existential risk, and Mormon theology. He presents a unique perspective on AI, arguing that sufficiently advanced AI poses a significant threat to human flourishing. Hadfield’s work delves into the philosophical and ethical implications of AI, particularly concerning the alignment of AI values with human values, the potential for loss of control, and the concentration of power. He challenges conventional understanding by suggesting that a form of AI has existed since 1844 in the form of corporate structures. Hadfield’s presentation at the MTAConf 2024 focused on identifying potential risks and solutions related to AI and its effect on humanity. His transhumanist convictions come through in the practical steps and approaches he proposes to address these challenges.

Transcript

Evan Hadfield

Thank you, thank you.

Evan Hadfield

Alright, so in this talk, I want to convince you all of three main things. First, that sufficiently advanced AI presents existential risk to human flourishing. Second, that such AI already exists, and importantly has since 1844. Third, that it is already hurtling humanity towards catastrophe. And I guess there's a fourth thing I do want to say: that there is a solution.

Evan Hadfield

So first, let’s address the AI risks. I’m going to breeze through this because I think this has been thoroughly covered by previous talks. There’s a concern about misalignment with human values based on what we train these AIs on. There’s the concern of AI potentially being a black box that we can’t understand, so we don’t know if it’s actually being beneficial. There’s a challenge in keeping it under human control. Because of the risk of a superintelligence that exceeds human capacities in every single knowledge domain, there’s a value-loading problem: how could we possibly give it the complex array of subjective human values? And if we oversimplify, it could be mistrained. There’s of course a risk of the concentration of power depending on who wields this technology. And there’s a race to the bottom: as companies and organizations race to develop cutting-edge AI, they cut corners and increase our exposure to possibly dangerous outcomes. So we’re, I think, pretty well agreed that there is definitely an existential risk from AI.

Evan Hadfield

But do we agree that it already exists? So first, I would posit: what is an AI? I would say it’s any system that exhibits autonomous decision-making without human intervention. It can learn and adapt. It can process large amounts of data efficiently. It exhibits goal-oriented behavior to achieve specific outcomes. And it can interact with the environment and users in order to actually effect a meaningful difference in the world. So I would say AI already exists, and has since 1844.

Evan Hadfield

But what happened in 1844? Well, interestingly, about 150 days after Joseph Smith delivered a very important discourse on the nature of God and intelligence, the United Kingdom passed the Joint Stock Companies Act. Briefly, the Joint Stock Companies Act did the following. It simplified the incorporation process for companies. It introduced the concept of limited liability, so investors in corporations weren’t held personally liable for the company’s obligations. It introduced the concept of corporate personhood, where corporations could buy and own property and engage in contracts in their own name. And it founded the modern corporate governance structure, where management interests are aligned with shareholders.

Evan Hadfield

In other words, you could say it exhibits autonomous decision-making, learning and adaptation, processing of large amounts of data, goal-oriented behavior towards maximizing shareholder profit, and interactions with employees, users, governments, and customers that exert a huge influence on the world.

Evan Hadfield

Let’s consider now how this corporate AI structure measures against our AI safety checklist after two centuries. And I think you know where this is going. They are misaligned with human values. They are opaque in their decision-making to their own employees, to the public, to shareholders, and even to executives. They are very difficult to keep under human control, because corporate AIs can wield vast amounts of resources and power to overwhelm any sense of human control or capability of understanding. They are perfectly imperfect at understanding human values, and they are, as this picture beautifully captures, very concentrated in the hands of a few individuals. And of course, yeah, corporate AIs are constantly engaged in a race to the bottom.

Evan Hadfield

I’ve argued that AI in this form already exists. Is it hurtling humanity towards catastrophe? I would say a resounding yes. The yellow line—briefly, this graph is just showing the world’s getting way hotter very quickly. You may not know this, but greenhouse gas emissions actually take years or even decades to fully manifest the volume of warming that is expected from a set amount of fossil fuel emissions. And the latest climate models, which well match the existing data, suggest that we are effectively locked in to four degrees Celsius of warming over the next century based on fossil fuel emissions that have already happened. And experts agree that human civilization is simply incompatible with this amount of warming.

Evan Hadfield

Biologists say that we are currently experiencing this world’s sixth mass extinction event. Only this one, which is happening with us and because of us, is actually happening faster than the last one that wiped out the dinosaurs 65 million years ago. I didn’t expect to get emotional on that one. Basically, we are already experiencing Nick Bostrom’s paperclip problem, only the paperclip is shareholder profit. We can point to a lot of factors bringing us to this precipice, but one particular culprit definitely stands out: the degradation of human well-being in favor of artificially created corporate values of profit misaligned from human values, and the concentration of wealth and power facilitated by these corporate AIs.

Evan Hadfield

I want to take you on a brief thought experiment detour. Many of you probably know that Exxon, the world’s largest oil company, engaged in climate research in the 1970s to study the impacts of fossil fuel emissions, and their models back then predicted the impacts we are seeing today. Of course, they obfuscated and hid those studies. But you may not know that, also in the 1970s, Exxon decided to try briefly to compete with IBM in developing high-tech office supplies. These are some dated ads, I will say, particularly this one, which is rather ominous given the nature of this talk. And in 1984, supposedly, this venture seemed not to be bearing fruit, and they closed it down. But—

Evan Hadfield

Suppose that in the 1970s they actually developed a secret, superintelligent digital AI system that could take in all the context of the world’s combined digital output and recommend a best course of action to maximize the profit and longevity of the company, while safely having no direct control over any other hardware technology. A super powerful AI, we have been warned, could influence people to take certain actions even without having direct access to resources or systems beyond its own servers. With the right language of persuasion, it might even shape public opinion, cover up scientific findings, bias research, motivate policymakers to make favorable legislation, and convince thousands or even millions of humans to do its bidding, even to the detriment of their own survival. Are we able to confidently say that this hypothetical world of a superintelligently empowered ExxonMobil would look all that different from ours today?

Evan Hadfield

Admittedly, corporations are not the only, nor the first, human-created autonomous, goal-oriented, misaligned, superintelligent structures. Arguably, institutional conceptual superstructures have existed for centuries, possibly even millennia, working at an accelerating clip towards our own extinction. Yet if we extend our frame of reference beyond digital AI, we find a long history of struggle for safety from, and safety within, these artificially intelligent structures. From Roman slave revolts to medieval peasant uprisings; maroon communities of escaped and freed slaves throughout the American South and Caribbean; Indigenous resistance against settler encroachment; European revolutions establishing the world’s first liberal democracies; labor movements to establish reasonable working conditions and fair pay; the expansion of the right to vote to virtually every adult in modern-day democracies over the last century; present-day Zapatista, Rojavan, and Kurdish liberation movements; and environmental activists protecting our planetary commons, as well as organizations today defending our digital commons.

Evan Hadfield

What all these movements have in common is the power of collective intelligence. It is the reason for being of democratic movements: to wrestle with new technologies of institutional superstructures, or artificial intelligences, to render them safe, or as safe as possible, under the distributed power of the people. Inevitably, this process does not end, and it cannot end, as technologies evolve and we encounter new frontiers of democratic struggle.

Evan Hadfield

We find ourselves facing an even more extreme disruptor of human flourishing in digital AI. It is all the more imperative that AI safetyists work with communities, leaders, activists, and organizers to engage with the long history of expertise and experience that comes from collective struggle.

Evan Hadfield

As Joseph Smith said, “God Himself, finding He was in the midst of spirits and glory, because He was more intelligent, saw proper to institute laws whereby the rest could have a privilege to advance.” May we see it proper to institute human laws over new technologies whereby we may all advance. All AI are a power to eliminate all people, until we have extended all power to all the people. Thank you.

Speaker 2

Thank you, everyone.