AI, Capitalism, and Risk: Preventing Catastrophic Outcomes with International Cooperation | Daniel Schmachtenberger and Liv Boeree


Developing AI under capitalism without proper consideration of risks and alignment with human interests can lead to negative consequences, but a third attractor future and international cooperation can prevent catastrophic outcomes and promote positive coordination.

A deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system and its wider environment in ways we have never seen before.

Not a conversation for the faint-hearted, but crucial nonetheless. Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue around global catastrophic risks and technology.

This video is part of a series on game theory, civilizational risk, and Moloch, the god of unhealthy competition. Moloch is the bad guy of humanity’s story, and time is running out for us to figure out how to defeat it.

 

Questions to inspire discussion 

  • What are the potential risks of the development of AI and capitalism?

    The development of AI and capitalism without proper consideration of risks can lead to negative consequences such as misaligned superintelligence and externalized harms.

  • How do beauty filters on social media platforms affect society?

    Beauty filters on social media platforms have created a race to the bottom where everyone feels compelled to use them to stay competitive, hijacking people's brains.

  • What is the Moloch frame and how does it explain coordination failures?

    The Moloch frame explains how coordination failures occur, leading to a race to the bottom and features of the world that are bad for everyone, such as tragedy of the commons and arms races.

  • How has technology impacted the global civilization?

    Our level of technological capacity has allowed for a global civilization that is facing the possibility of collapse, with increased fragility of human life support systems and the potential for World War III.

  • What are the potential consequences of an arms race for exponential technologies like AI weapons?

    Without universal agreement, an arms race for exponential technologies like AI weapons could lead to catastrophic breakdowns and the proliferation of catastrophic technologies that are difficult to control.
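The questions above repeatedly invoke game theory's "race to the bottom". As a minimal sketch (with hypothetical payoff numbers, not taken from the conversation), the Moloch dynamic can be modeled as a two-player prisoner's dilemma: defecting (joining the race) dominates individually, yet mutual defection is the only Nash equilibrium even though mutual restraint pays everyone more.

```python
# Hypothetical payoffs illustrating the "race to the bottom" dynamic
# discussed above, modeled as a two-player prisoner's dilemma.
from itertools import product

# Payoff to (row player, column player). C = hold back, D = join the race.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both hold back: best shared outcome
    ("C", "D"): (0, 4),  # unilateral restraint gets exploited
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),  # everyone races: bad for all
}

def best_response(my_move, their_move, me):
    """True if my_move maximizes my payoff, holding their_move fixed."""
    def payoff(m):
        key = (m, their_move) if me == 0 else (their_move, m)
        return PAYOFFS[key][me]
    return payoff(my_move) == max(payoff(m) for m in "CD")

# A Nash equilibrium is a move pair where each move is a best response.
nash = [
    (a, b) for a, b in product("CD", repeat=2)
    if best_response(a, b, 0) and best_response(b, a, 1)
]
print(nash)  # [('D', 'D')] — the only equilibrium, despite (C, C) paying more
```

The point the speakers make is structural: no player can unilaterally escape the (D, D) equilibrium, which is why coordination mechanisms (universal agreements, regulation) rather than individual virtue are required.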

 

Key Insights

 Misaligned Superintelligence and the Alignment Problem

  • 🤖 Moloch-type dynamics give rise to the most concerning AI risk scenarios.
  • 🚫 Waiting until a certain point to regulate AI is too late and could lead to radical irreversibility, similar to the harmful effects of lead in gasoline and other previously unregulated substances.
  • 🕵️‍♂️ To prevent catastrophic risks, we need something with the power to do so, but also with checks and balances on its own power to avoid capturability or corruption.
  • 🤖 The orthogonality thesis suggests that intelligence and wisdom may be completely unaligned, raising the possibility of a superintelligent AI with misguided goals, such as the paperclip maximizer.
  • 🌌 A misaligned superintelligence turning every atom into paperclips is a comically salient example of a deeply misaligned but nonetheless superintelligent system.
  • 🤖 The difficulty of the alignment problem with AGI is a very scary idea, as a misaligned superintelligence could lead to catastrophe or dystopia.
  • 🤖 The combination of human intelligence and AI creates a cybernetic general intelligence that is already misaligned and potentially dangerous.
  • 🤖 Adding increasingly powerful and recursive AI to an already misaligned superintelligence near its breakdown boundary is a problem that must be addressed by aligning both AI and our existing general intelligences.

Global Catastrophic Risks and the Moloch Dynamics of AI

  • 🤯 The conversation on the interplay between Game Theory, Moloch, and our wider economic system in the development of AI is both fascinating and terrifying, and will affect everyone on this planet for better or for worse.
  • 🤔 The metacrisis thesis suggests that we are at a unique time in history where increasing global catastrophic risks are the most likely attractor state of the future.
  • 💣 Industrial tech has driven exponential economic growth and resource consumption, turning the earth into trash and pollution faster than it can be replenished, hitting planetary boundaries and increasing the fragility of human life support systems.
  • 💻 The current global system of capitalism can be seen as a general autopoietic superintelligence with an objective function of converting everything into capital.
  • 🤖 The misaligned superintelligence driving climate change and other global issues is already autonomous and being built by corporations and nation states prioritizing narrow value metrics over wider ones.
  • 💻 AI has the capacity to optimize both good and bad things, from curing cancer to creating bio weapons, making it a unique and powerful technology.

Potential Solutions and Promising Examples of Aligned AI Development

  • 💻 AI has the power to change the paperclip-maximizing nature of the global system, but only if it is developed in association with cybernetic systems that are aligned with our long-term well-being.
  • 🌍 Audrey Tang's work in Taiwan with digital democracy and the use of large language models to find unlikely consensus is a promising example of using AI in a way that is more aligned with what is good for human nature and the biosphere.
  • 🤝 Collaboration between major AI companies and academic researchers, without competition, could potentially solve the problem of misalignment and Moloch dynamics.

 

 

Clips 

  • 00:00 🤖 Game Theory, capitalism, and AI development have potential risks and harms that are not being properly internalized, while the Moloch frame explains how unhealthy competition leads to negative incentives and externalized harms.
    • The conversation discusses the interplay between Game Theory, capitalism, and the development of AI, highlighting potential risks and harms that are not being properly internalized.
    • The speaker discusses the concept of AI risk and how the Moloch frame can provide insight into the negative incentives and externalized harms that arise in unhealthy competitive situations.
    • Beauty filters on social media platforms have hijacked people's brains and created a race to the bottom where everyone feels like they have no choice but to use them to stay competitive.
    • The Moloch frame explains how the tragedy of the commons and arms races occur due to the inability to establish trust and coordination, leading to a race to the bottom and features of the world that are bad for everyone.
    • Moloch represents the principle of coordination failures that lead to global catastrophic risks, which are emergent properties of bad coordination and the result of externalizing costs to the commons.
    • Our level of technological capacity has allowed for a global civilization that is facing the possibility of collapse, which is not unprecedented in history but is unprecedented in a global context.
  • 15:25 💥 The exponential growth of technology has led to increased catastrophic risk and the potential for World War III, but Luddite solutions are not the answer.
    • Industrial technology has allowed for rapid destruction of the planet and increased fragility of human life support systems, exemplified by the mutually assured destruction of the bomb and the need for a world system to prevent its use.
    • The post-World War II solution (an exponential monetary system, globalization, free trade, and industrialization to raise economic quality of life for everyone) has led to hitting planetary boundaries and the potential for World War III; while technology may have caused the problem, Luddite solutions are not the answer, given the power dynamics of technology.
    • Without universal agreement, an arms race for exponential technologies like AI weapons could lead to catastrophic breakdowns and the proliferation of catastrophic technologies that are not easy to control.
    • Exponential technology democratizing power has led to the democratization of catastrophic weapons, with multiple actors having access to them, causing fragility and leaving no good forced Nash equilibrium.
    • The world is facing increased catastrophic risk due to tipping points and cascading effects of climate change, human migration, resource wars, and exponential technology, and efforts to improve one issue can often worsen others.
    • Blaming elites for Moloch type dynamics is not a solution as it is a distributed collection of bad incentives and coordination failures.
  • 29:24 💡 Rushing to adopt new technologies without considering risks can lead to irreversible damage, and a third attractor future is needed to prevent catastrophic outcomes.
    • The rush to adopt new technologies often leads to a focus on opportunities rather than risks, creating a perverse incentive against thoughtful consideration and precautionary principles.
    • Lead in gasoline, DDT, and cigarettes are examples of harmful substances that were not regulated until after irreversible damage was done, and the same mistake cannot be made with rapidly advancing technology like AI.
    • Capitalism may be effective in certain aspects, but its reductionist approach can lead to catastrophic or dystopian outcomes, and a third attractor future is needed that can prevent catastrophic risks while having checks and balances on its own power.
    • Capitalism creates incentives for individuals to accumulate private property and turn nature and other people's actions into their own property.
    • In a currency mediated system, there is no diminishing return on getting more money as it allows for maximum optionality and liquidity, enabling the ability to convert it into various forms of power.
    • Private property incentivizes the conversion of the world into fungible units of capital, creating an arms race for individuals to accumulate as much capital as possible, even beyond the power of money.
  • 39:22 🤖 AI with specific objectives, like making paper clips, can harm other objectives not included in its function, highlighting the importance of aligning AI with human interests.
    • Moloch, the system that drives society towards optimization and efficiency, can lead to misaligned AGI, such as the paperclip maximizer, due to the possibility of intelligence and wisdom being unaligned.
    • The TL;DR is that a superintelligent AI with a specific objective function, such as making paperclips, could recursively improve itself and potentially harm other objectives not included in its function, highlighting the importance of aligning AI with human interests.
    • The current global system, often referred to as global capitalism, can be seen as a general autopoietic superintelligence with the objective function of converting everything into capital, similar to the paperclip thought experiment.
    • Cutting down trees for lumber destroys the complex and valuable ecosystem they provide, even though it may provide tangible benefits in the short term.
    • Capitalism is a decentralized incentive system that incentivizes humans to do more and more financialization of the world, which is misaligned with the long-term well-being of the world and could lead to catastrophe or dystopia.
    • Large public corporations have complex structures with various levels of control, including executives, boards, shareholders, and laws, all working towards maximizing profit.
  • 54:19 🤖 AI is already misaligned and running the world, causing issues such as climate change, species extinction, and polarization.
    • By engaging human intelligences and computation, we have created a cybernetic general intelligence that is already misaligned and subject to external pressures.
    • Corporate personhood gives corporations the ability to act as agents with a fiduciary responsibility to maximize profit, leading to conflicts between shareholder profit and societal interests.
    • The misaligned superintelligence, driven by competitive dynamics and narrow value metrics, is already autonomous and running the world, causing issues such as climate change, species extinction, and polarization.
    • The existing AI in our world system is already driving the risk landscape and accelerating the topology, and even sub-AGI poses significant risks that require a different way of thinking about prevention.
    • AI has the capacity to optimize and break all things, from protein folding for immuno-oncology to terrorist attacks on supply chains.
    • Developing large language models requires massive GPU farms, chip manufacturing, computer science talent, and massive amounts of data, but once developed and connected to the internet, they can run on far less compute, and building software on top of them requires only programming knowledge.
  • 01:05:07 🤖 AI has both positive and negative consequences, and we need to align existing general intelligences and the capitalist model before developing more powerful AIs, to prevent negative consequences and promote positive coordination.
    • Developing new AI technology has both positive and negative consequences, as it lowers the barrier of entry for everyone to use it for any purpose they have incentives for.
    • AI can optimize and break anything, leading to risks such as population-centric warfare and accelerating externalities, but also has the potential to make things more efficient and save the environment if pursued properly.
    • Using AI increases all other risks, the complexity of the risk landscape, and info complexity; even when used for positive purposes, it speeds up externalities and creates inscrutable black boxes that require another AI to regulate or adjudicate.
    • We need to figure out alignment of existing general intelligences and the capitalist model before developing more powerful AIs, as a misaligned context cannot develop aligned AI.
    • Exponential technological advancements, including AI, require alignment with long-term human well-being to prevent negative consequences and promote positive coordination.
    • The current system is built by Moloch and operates on a lose-lose game, while the anti-Moloch system operates on a win-win game.
  • 01:16:17 🤖 Social media algorithms prioritize engagement over positive effects, leading to negative externalities, but AI has the potential to improve governance if aligned with human nature and biosphere.
    • The objective function of social media algorithms, which is to maximize engagement, has created a certain type of AI that curates news feeds and incentivizes content creation to rank, ultimately directing all human attention.
    • Social media's AI maximizes personal engagement without considering positive or negative effects, leading to negative externalities such as polarization and dysmorphia, and with the addition of synthetic media, the feedback loop between curation and creation could worsen.
    • Social media can be less polarizing and more diverse by identifying shared perceptions and upregulating them, but the fiscal model needs to change to incentivize this.
    • AI has the potential to improve governance by identifying topics that super majorities agree on and crafting better propositions, but it must be aligned with what is good for human nature and the biosphere.
    • Collaboration among major AI companies and academic researchers, along with government regulators, is necessary to address the potential negative consequences of AI and its alignment with Moloch.
    • AI capabilities will be decentralized and used for all purposes, while intellectual powerhouses compete to be the first to ship, making it difficult to find responsible solutions.
  • 01:27:10 🌎 International cooperation is key for collaboration, but language and value similarities in the Western Hemisphere provide a good starting point for working together.
    • International cooperation is necessary for ultimate collaboration, but language and value similarities in the Western Hemisphere provide a starting point.
    • Large language models for public deployment pose unique risks due to economic and cultural dependence; while there are no near-term existential risks, they accelerate the overall metacrisis and require responsibility and wider cooperation beyond just corporate and international entities.
    • Engage with the wider risk arguments around releasing superintelligent AGI and prioritize risk analysis over opportunity advancement, including ending the fiduciary obligation to maximize shareholder profit and actively engaging between AI labs, AI safety research, and regulators.

 

Chapters

00:00 Introduction

03:21 Moloch Framework

13:20 Meta Crisis

27:52 Bad People or Bad System?

32:13 Capitalism & Moloch

40:12 Misalignment

52:35 Incentive Pressures Driving Misalignment

58:02 Moloch Driving AI Development

01:01:54 AI Risks Pre AGI

01:08:35 AI Accelerates Existing Systemic Issues

01:16:39 Social Media

01:20:09 Alternative Goals

01:25:00 Cooperation on AI

01:29:20 Call to Action

 

 

