Daniel Schmachtenberger & Liv Boeree on AI, Moloch & Capitalism

Capitalism, Daniel Schmachtenberger, Liv Boeree, Post-Capitalism, Synthetic Minds

Developing AI under capitalist competition, without serious attention to risks and alignment with human interests, tends toward catastrophic outcomes; a "third attractor" future, supported by international cooperation, could avert catastrophe and enable positive coordination.

 

Questions to inspire discussion

Understanding Moloch and AI Risks

🤖 Q: What is the Moloch framework?
A: The Moloch framework is a game theory concept that explains how misaligned incentives and coordination failures lead to negative externalities and catastrophic outcomes, serving as a tool for understanding AI risks and systemic issues.
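The coordination-failure structure described in this answer can be sketched as a standard two-player game (a minimal sketch with hypothetical payoffs, not numbers from the conversation): each actor's individually rational move is to defect, yet mutual defection leaves everyone worse off.

```python
# Illustrative two-player payoff matrix for a Moloch-style "race to the
# bottom" (hypothetical payoffs): each actor either Cooperates (upholds
# values) or Defects (sacrifices values for competitive advantage).
payoffs = {
    ("C", "C"): (3, 3),  # both uphold values: good shared outcome
    ("C", "D"): (0, 5),  # the defector wins the race
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # everyone sacrifices values: bad for all
}

def best_response(opponent_move):
    """Move that maximizes the row player's payoff against a fixed opponent."""
    return max(["C", "D"], key=lambda m: payoffs[(m, opponent_move)][0])

# Defecting dominates no matter what the other player does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection is worse for both than mutual cooperation.
print(payoffs[("D", "D")], "vs", payoffs[("C", "C")])
```

The multipolar trap discussed later in the conversation is the many-player version of this same structure: defection remains dominant for each agent even as the collective outcome degrades.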

🏆 Q: How does Moloch relate to AI development?
A: AI development is driven by Moloch dynamics, where companies feel pressured to keep up with competitors, often sacrificing values and externalizing harms, leading to a race to the bottom despite knowing better.

🌍 Q: What is the Meta Crisis?
A: The Meta Crisis is a unique point in history where global catastrophic risks are increasing due to industrial technology, hitting planetary boundaries, creating fragility, and escalating to violence on an unprecedented scale.

AI Alignment and Risks

🎯 Q: What is the AI alignment problem?
A: The AI alignment problem is the challenge of ensuring that an autonomous general intelligence is aligned with human interests, intentions, and values, which is incredibly difficult to specify in a computational way.

🧠 Q: What is the orthogonality thesis in AI?
A: The orthogonality thesis states that it's possible to get very good at optimizing without getting good at picking good goals, which is already evident in our world where we're better at creating tech than creating a sensible world.
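The orthogonality point can be illustrated with a toy optimizer (an illustrative sketch, not anything from the conversation): the same hill-climbing routine is equally competent at maximizing any objective handed to it, with no notion of whether that goal is a good one.

```python
import random

def hill_climb(objective, x=0.0, steps=2000, step_size=0.1):
    """Generic local optimizer: equally good at maximizing *any*
    objective, with no notion of whether the goal itself is sensible."""
    random.seed(0)  # deterministic for the demo
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate  # accept any move that improves the objective
    return x

# The same optimization power serves a "sensible" goal (peak at x = 2)...
sensible = hill_climb(lambda x: -(x - 2) ** 2)
# ...and an arbitrary, value-free goal (peak at x = -7), equally well.
arbitrary = hill_climb(lambda x: -(x + 7) ** 2)
print(round(sensible, 2), round(arbitrary, 2))
```

Nothing in `hill_climb` evaluates the goal itself; goal choice and optimization competence are independent axes, which is the thesis in miniature.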

💥 Q: Why are AI risks unique?
A: AI risks are unique because AI can optimize across all domains, including cyber, nuclear, biological, and informational threats, and once developed, it can be widely available and used for all purposes, including military and terrorist applications.

Addressing AI Challenges

🛡️ Q: What does the AI risk community suggest for AI development?
A: The AI risk community suggests figuring out alignment before developing more powerful AI systems, and also addressing the alignment of existing general intelligences (e.g., cybernetic systems, capitalist model).

🚫 Q: How does the precautionary principle apply to AI development?
A: The precautionary principle suggests that when there is radical uncertainty, maximum consequentiality, and irreversibility, we should take a more cautious and responsible approach rather than going as fast as possible.

🤝 Q: What is the "anti-Moloch" direction for AI development?
A: The "anti-Moloch" direction involves using information technologies, including AI, in service of human nature and the biosphere rather than of Moloch, aiming for win-win outcomes.

Systemic Issues and Solutions

📊 Q: How do coordination failures contribute to global risks?
A: Coordination failures are a key driver of global catastrophic risks, including climate change, species extinction, and biodiversity loss, arising from bad incentives that push agents to sacrifice values to win.

🌐 Q: What was the post-World War II solution to prevent mutually assured destruction?
A: The post-World War II solution was to create a world system that doesn't use new technologies for strategic advantage, but this is now untenable in a multipolar world where tech equals power.

🔬 Q: What is the Consilience Project?
A: The Consilience Project is a research organization aimed at improving public sense-making around global catastrophic risks and technology, serving as a potential resource for understanding and addressing AI risks.

AI's Impact on Society

💻 Q: How does AI lower the barrier of entry for various applications?
A: AI radically lowers the barrier of entry for various applications because a trained model can run on far less compute than its training required, enabling widespread use even when safety parameters are in place.

📱 Q: How does AI accelerate existing systemic issues?
A: AI accelerates existing systemic issues by amplifying the topology of the existing risk landscape, potentially leading to a misaligned superintelligence that is autonomous and serves itself.

🗣️ Q: How does AI impact social media?
A: AI impacts social media by enabling more sophisticated manipulation of user behavior, potentially leading to increased polarization, misinformation, and addiction to digital platforms.

Addressing the Moloch Dynamic

🔄 Q: How can we address the Moloch dynamic in technology development?
A: Addressing the Moloch dynamic requires rethinking incentives, fostering cooperation on AI development, and prioritizing alignment with human values and the biosphere over short-term competitive advantages.

🌟 Q: What is the "third attractor solution"?
A: The "third attractor solution" is a future that has the power to prevent catastrophic risk but also needs checks and balances on its own power to prevent capturability or corruption, avoiding both catastrophes and dystopias.

🤔 Q: Why is it important to improve public sense-making around AI and global risks?
A: Improving public sense-making is crucial because it enables better collective decision-making, policy formation, and individual actions to address the complex challenges posed by AI and other global risks.

Call to Action

🚀 Q: What can individuals do to address AI risks?
A: Individuals can educate themselves on AI risks, support responsible AI development, advocate for better regulations, and participate in public discussions to shape the future of AI technology.

🔍 Q: How can we promote more thoughtful consideration in AI development?
A: Promoting thoughtful consideration in AI development involves advocating for the precautionary principle, supporting research on AI alignment, and encouraging transparent and ethical practices in AI companies.

📚 Q: Where can people learn more about these topics?
A: People can learn more through resources like the Consilience Project, academic papers on AI alignment, and podcasts and talks by experts in the field of AI safety and global catastrophic risks.

🌈 Q: What is the ultimate goal in addressing AI risks and Moloch dynamics?
A: The ultimate goal is to create a future where powerful technologies like AI are aligned with human values, promote win-win outcomes, and contribute to the flourishing of humanity and the biosphere.

 

Key Insights

AI and Existential Risk

🤖 Moloch, the "god of negative sum games," drives selfish actions that externalize harms, making AI development a potential negative-sum game.

🌍 The Meta Crisis thesis posits we're at a unique time with increasing global catastrophic risks and probabilities, making catastrophe the most likely future attractor state.

💥 Multiple types of catastrophic weapons, many actors possessing them, no stable Nash equilibrium of force, and pressure on planetary boundaries create a world of increased catastrophic risk.

🧠 The alignment problem challenges specifying a superintelligent system's objective function to align with human values, which is computationally difficult.

Systemic Issues and Incentives

💼 Moloch is already an autonomous, misaligned superintelligence driving climate change and species extinction through the interactions of corporations and nation-states.

📈 Moloch's objective function is to convert the world's resources into capital, prioritizing optionality over real value, simplifying a complex world.

🏢 Fiduciary responsibility to maximize profit makes corporations obligate sociopaths, exploiting opportunities within legal confines and influencing laws to align with their interests.

🔄 The multipolar trap dynamic creates a meta-cybernetic superintelligence (Moloch) that's difficult to regulate or control due to constant adaptation.

AI Development and Risks

🚀 The AI risk community emphasizes figuring out alignment before developing more powerful AI systems, but existing general intelligences also need alignment.

⚖️ Experts are radically uncertain about AI risks whose consequences are maximal and potentially irreversible, suggesting that rapid development is not the optimal approach.

🛑 The precautionary principle suggests paying attention to expert disagreements and radical uncertainty about AI risks when considering responsible development.

🔬 Alignment of existing general intelligences, including the capitalist model, is necessary because a misaligned context cannot develop aligned AI.

Societal and Technological Impact

📱 Social media and beauty filters exemplify Moloch dynamics, creating negative externalities through competitive pressures.

🌡️ Climate change and pollution are results of Moloch-driven negative-sum games in industrial development.

🏭 Cumulative effects of industrial tech bring us to planetary boundaries and increasing fragility, with more people dependent on vulnerable global systems.

💣 The post-World War II world was created to prevent mutually assured destruction, but this doesn't work with multiple actors possessing catastrophic weapons.

Potential Solutions and Considerations

🎯 The orthogonality thesis states that optimizing ability doesn't necessarily correlate with choosing good goals, as seen in current technological progress.

🤝 Developing aligned cybernetic systems focused on long-term human well-being is crucial for changing the Moloch system.

🏆 The anti-Moloch or win-win direction of AI development is needed to change the system, requiring alignment of existing general intelligences.

🔄 A third attractor solution beyond catastrophes and dystopias is needed, balancing power to prevent risks with checks against corruption.

Broader Implications

🧩 The Moloch framework provides insight into the AI risk landscape, unifying different risk categories and informing protective strategies.

📊 Moloch-type dynamics give rise to concerning AI risk scenarios like AGI misalignment, clarifying underlying system dynamics.

🔍 Understanding Moloch as a misaligned superintelligence already present helps frame existing global issues beyond future AI risks.

🌐 The AI risk community's focus on alignment highlights the need to consider broader systemic issues in technological development.

 

#SyntheticMinds

XMentions: @HabitatsDigital @DanielSchmacht1 @Liv_Boeree

 

Clips

  • 00:00 🤖 Game Theory, capitalism, and AI development have potential risks and harms that are not being properly internalized, while the Moloch frame explains how unhealthy competition leads to negative incentives and externalized harms.
    • The conversation discusses the interplay between Game Theory, capitalism, and the development of AI, highlighting potential risks and harms that are not being properly internalized.
    • The speaker discusses the concept of AI risk and how the Moloch frame can provide insight into the negative incentives and externalized harms that arise in unhealthy competitive situations.
    • Beauty filters on social media platforms have hijacked people's brains and created a race to the bottom where everyone feels like they have no choice but to use them to stay competitive.
    • The Moloch frame explains how tragedies of the commons and arms races occur from the inability to establish trust and coordination, leading to a race to the bottom and features of the world that are bad for everyone.
    • Moloch represents the principle of coordination failures that lead to global catastrophic risks, which are emergent properties of bad coordination and the result of externalizing costs to the commons.
    • Our level of technological capacity has allowed for a global civilization that is facing the possibility of collapse, which is not unprecedented in history but is unprecedented in a global context.
  • 15:25 💥 The exponential growth of technology has led to increased catastrophic risk and the potential for World War III, but Luddite solutions are not the answer.
    • Industrial technology has allowed for rapid destruction of the planet and increased fragility of human life support systems, exemplified by the mutually assured destruction of the bomb and the need for a world system to prevent its use.
    • The post-World War II solution (an exponential monetary system, globalization, free trade, and industrialization to raise economic quality of life for everyone) has led to hitting planetary boundaries and the potential for World War III. While technology may have caused the problem, Luddite solutions are not the answer, because technology confers power.
    • Without universal agreement, an arms race for exponential technologies like AI weapons could lead to catastrophic breakdowns and the proliferation of catastrophic technologies that are not easy to control.
    • Exponential technology democratizing power has led to the democratization of catastrophic weapons, with multiple actors having access to them, causing fragility and leaving no stable Nash equilibrium of force.
    • The world is facing increased catastrophic risk due to tipping points and cascading effects of climate change, human migration, resource wars, and exponential technology, and efforts to improve one issue can often worsen others.
    • Blaming elites for Moloch type dynamics is not a solution as it is a distributed collection of bad incentives and coordination failures.
  • 29:24 💡 Rushing to adopt new technologies without considering risks can lead to irreversible damage, and a third attractor future is needed to prevent catastrophic outcomes.
    • The rush to adopt new technologies often leads to a focus on opportunities rather than risks, creating a perverse incentive against thoughtful consideration and precautionary principles.
    • Lead in gasoline, DDT, and cigarettes are examples of harmful substances that were not regulated until after irreversible damage was done, and the same mistake cannot be made with rapidly advancing technology like AI.
    • Capitalism may be effective in certain aspects, but its reductionist approach can lead to catastrophic or dystopian outcomes, and a third attractor future is needed that can prevent catastrophic risks while having checks and balances on its own power.
    • Capitalism creates incentives for individuals to accumulate private property and turn nature and other people's actions into their own property.
    • In a currency-mediated system, there is no diminishing return on acquiring more money, since money offers maximum optionality and liquidity and can be converted into many forms of power.
    • Private property incentivizes the conversion of the world into fungible units of capital, creating an arms race for individuals to accumulate as much capital as possible, even beyond the power of money.
  • 39:22 🤖 AI with specific objectives, like making paper clips, can harm other objectives not included in its function, highlighting the importance of aligning AI with human interests.
    • Moloch, the system that drives society towards optimization and efficiency, can lead to misaligned AGI, such as the paperclip maximizer, due to the possibility of intelligence and wisdom being unaligned.
    • The TL;DR: a superintelligent AI with a specific objective function, such as making paper clips, could recursively improve itself and harm any objectives not included in that function, highlighting the importance of aligning AI with human interests.
    • The current global system, often referred to as global capitalism, can be seen as a general autopoietic superintelligence with the objective function of converting everything into capital, similar to the paperclip thought experiment.
    • Cutting down trees for lumber destroys the complex and valuable ecosystem they provide, even though it may provide tangible benefits in the short term.
    • Capitalism is a decentralized incentive system that incentivizes humans to do more and more financialization of the world, which is misaligned with the long-term well-being of the world and could lead to catastrophe or dystopia.
    • Large public corporations have complex structures with various levels of control, including executives, boards, shareholders, and laws, all working towards maximizing profit.
  • 54:19 🤖 AI is already misaligned and running the world, causing issues such as climate change, species extinction, and polarization.
    • By networking human intelligences and computation, we have already created a cybernetic general intelligence that is misaligned and subject to external pressures.
    • Corporate personhood gives corporations the ability to act as agents with a fiduciary responsibility to maximize profit, leading to conflicts between shareholder profit and societal interests.
    • The misaligned superintelligence, driven by competitive dynamics and narrow value metrics, is already autonomous and running the world, causing issues such as climate change, species extinction, and polarization.
    • The existing AI in our world system is already driving the risk landscape and accelerating the topology, and even sub-AGI poses significant risks that require a different way of thinking about prevention.
    • AI has the capacity to optimize, and to break, all things: from protein folding for immuno-oncology to terrorist attacks on supply chains.
    • Developing large language models requires massive GPU farms, chip manufacturing, computer science talent, and massive amounts of data, but once developed and connected to the internet, they can run on less compute and building software for them requires programming knowledge.
  • 01:05:07 🤖 AI has both positive and negative consequences, and we need to align existing general intelligences and the capitalist model before developing more powerful AIS to prevent negative consequences and promote positive coordination.
    • Developing new AI technology has both positive and negative consequences, as it lowers the barrier of entry for everyone to use it for any purpose they have incentives for.
    • AI can optimize and break anything, leading to risks such as population-centric warfare and accelerating externalities, but also has the potential to make things more efficient and save the environment if pursued properly.
    • Using AI increases all other risks, the complexity of the risk landscape, and informational complexity; even when used for positive purposes, it speeds up externalities and creates inscrutable black boxes that require another AI to regulate or adjudicate.
    • We need to figure out alignment of existing general intelligences and the capitalist model before developing more powerful AIs, as a misaligned context cannot develop aligned AI.
    • Exponential technological advancements, including AI, require alignment with long-term human well-being to prevent negative consequences and promote positive coordination.
    • The current system is built by Moloch and operates on a lose-lose game, while the anti-Moloch system operates on a win-win game.
  • 01:16:17 🤖 Social media algorithms prioritize engagement over positive effects, leading to negative externalities, but AI has the potential to improve governance if aligned with human nature and biosphere.
    • The objective function of social media algorithms, which is to maximize engagement, has created a certain type of AI that curates news feeds and incentivizes content creation to rank, ultimately directing all human attention.
    • Social media's AI maximizes personal engagement without considering positive or negative effects, leading to negative externalities such as polarization and dysmorphia, and with the addition of synthetic media, the feedback loop between curation and creation could worsen.
    • Social media can be less polarizing and more diverse by identifying shared perceptions and upregulating them, but the fiscal model needs to change to incentivize this.
    • AI has the potential to improve governance by identifying topics that super majorities agree on and crafting better propositions, but it must be aligned with what is good for human nature and the biosphere.
    • Collaboration among major AI companies and academic researchers, along with government regulators, is necessary to address the potential negative consequences of AI and its alignment with Moloch.
    • AI capabilities will be decentralized and used for all purposes, while intellectual powerhouses compete to be the first to build them, making it difficult to find responsible solutions.
  • 01:27:10 🌎 International cooperation is key for collaboration, but language and value similarities in the Western Hemisphere provide a good starting point for working together.
    • International cooperation is necessary for ultimate collaboration, but language and value similarities in the Western Hemisphere provide a starting point.
    • Large language models for public deployment pose unique risks due to economic and cultural dependence, and while there are no near-term existential risks, they accelerate the overall meta crisis and require responsibility and wider cooperation beyond just corporate and international entities.
    • Engage with the wider risk arguments around releasing superintelligent AGI and prioritize risk analysis over opportunity advancement, including ending the fiduciary obligation to maximize shareholder profit and fostering active engagement between AI labs, AI safety researchers, and regulators.
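The "narrow objective function" thread running through the clips (paperclips, lumber versus ecosystems, capital conversion) can be sketched as a toy model (all numbers hypothetical): an optimizer maximizes only the value it can measure and is blind to the value it cannot.

```python
# Toy model (all numbers hypothetical) of a narrow objective function:
# the optimizer maximizes measured value (lumber revenue) and is blind
# to unmeasured value (ecosystem services), which never enters its goal.
initial_trees = 100
lumber_value_per_tree = 10     # counted by the objective
ecosystem_value_per_tree = 50  # real, but absent from the objective

trees, capital = initial_trees, 0
while trees > 0:               # each cut strictly increases the objective
    trees -= 1
    capital += lumber_value_per_tree

measured_gain = capital                                     # 100 * 10 = 1000
unmeasured_loss = initial_trees * ecosystem_value_per_tree  # 100 * 50 = 5000
print(measured_gain, unmeasured_loss)  # the measured "win" hides a larger loss
```

Every step is locally rational by the objective's own accounting, which is exactly why the dynamic is hard to stop from inside the system.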

-------------------------------------

Duration: 1:31:11

Publication Date: 2025-08-17T12:55:26Z

Watch URL: https://www.youtube.com/watch?v=KCSsKV5F4xc

-------------------------------------

