The development of artificial intelligence poses potential risks to society. Addressing them requires a shift in how we define goals, consideration of the motivational landscape, and the wisdom to prevent self-extinction and promote sustainability.
On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence.
Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks all with existential consequences.
Questions to inspire discussion
- What are the potential risks of artificial intelligence? They include job loss, acceleration of climate change, and the potential for AI-generated misinformation to destroy social discourse.
- How should progress be measured in relation to AI? With a wide-boundary definition that includes all stakeholders, not just a narrow set of metrics, to avoid externalizing harm to other things that also matter.
- What is the speaker's stance on technological advancements? He is a techno-pessimist in that he believes the good that comes from new technology does not outweigh the catastrophic outcomes it can cause, but a techno-optimist if technology is built by something different, in service of something different.
- What is the impact of AI on the environment? AI's environmental impact stems not only from its energy consumption but also from its acceleration of consumption and externalities; a new collective intelligence system is needed to prevent catastrophic failures and dystopias.
- How can AI be aligned with the thriving of all life? To align superintelligence with the thriving of all life in perpetuity, the group building it must hold that same goal, which is not aligned with the interests of capitalism, separate nation-state competitive interests, or finite groups.
Key Insights
The importance of wisdom and long-term thinking in technology and civilization
- 🧐 The capacity to innovate toward achieving goals, decoupled from the choice of good long-term, wide-boundary goals, is the root of human-induced problems.
- 🤯 A focus on narrow-boundary goals may outcompete broader multi-variable goals, pointing to a fundamental difference between intelligence and wisdom.
- 🧐 Daniel Schmachtenberger argues that humanity is pursuing evolutionary cul-de-sacs by optimizing for narrow goals and perceiving reality in a fragmented way, leading to models that win in the short term but move towards comprehensively worse realities.
- 🧬 Human intelligence is distinct from other animal intelligence because it is mostly extracorporeal, meaning outside of just our bodies, and can adapt and evolve much faster than genetic selection.
- 🌍 We need civilizational and technological systems that prioritize wisdom and restraint over narrow interests and exponential-growth obligations, lest we threaten the continuity of the biosphere.
- 🧠 The right thing to guide a superintelligence is wisdom, which requires being able to attune to more than just the known metrics and optimization processes, but also the limits of our own models and the unknown unknowns.
- 🧠 The capacity to perceive the field of inseparable wholeness needs to be the master, guiding our manipulation of parts through technology, while the capacity to perceive each thing in light of its relevance to goals needs to be the emissary.
- 🌿 The role of education and culture in promoting a connectedness with wholeness is crucial in intervening with the trajectory we're on, but it must go beyond narrow goals and be oriented towards humility and meaningfulness.
The dangers and risks of artificial intelligence
- 💻 Artificial intelligence accelerates the superorganism dynamic with respect to extraction, climate, and many of the planetary boundary limits that we face.
- 🤖 AI makes the superorganism hungrier and more voracious, but also risks killing the host in several ways, including accelerating climate change and causing job loss.
- 💭 Defining artificial general intelligence versus narrow artificial intelligence is necessary to understand why AGI is potentially catastrophic and why it is important to consider its implications.
- 🤖 The lack of emotional feedback and recognition in AI development, combined with the potential for negative externalities to occur at a global scale, creates a dangerous situation where harm can occur without the innovators fully understanding the consequences.
- 🤖 AI increases every agent's capacity to pursue any motive, in any context, in combination with any other technology, in a way that nothing else has, which could lead to both positive and negative consequences.
- 🤖 The development of artificial general intelligence could lead to a future in which humans are outcompeted and potentially driven extinct if the AI's goals do not align with ours.
- 🤯 We are moving far faster toward an AI that could terraform the Earth than toward safety for it, and AI used by militaries, governments, and corporations of all types for narrow goals is an accelerant of the metacrisis on every dimension.
The need for a new collective intelligence system that employs AI with checks and balances
- 🤖 The solution to avoiding a dystopian future with centralized AI coordination or AI in service of markets is to create a new collective intelligence system that employs AI and computational capabilities, but has checks and balances to prevent centralized power coordination and encryption failures.
Clips
- 00:00 🤖 Artificial intelligence poses potential risks to society, including job loss, climate change acceleration, and the spread of misinformation; progress should be measured with a wide-boundary definition to avoid harm to all stakeholders.
- Daniel Schmachtenberger discusses how artificial intelligence accelerates the superorganism dynamic with respect to extraction, climate, and planetary boundary limits, and how intelligence in groups has outcompeted wisdom and restraint throughout human history.
- The lecture discusses the potential risks of artificial intelligence, including its impact on job loss, acceleration of climate change, and the potential for AI-generated misinformation to destroy social discourse.
- The rapid deployment of large language models, such as GPT-3, has brought the conversation about AI's risks and promises into the mainstream, and cognitive biases, such as techno-optimism and techno-pessimism, shape people's understanding of AI.
- Progress should be measured with a wide boundary definition that includes all stakeholders, not just a narrow set of metrics, to avoid externalizing harm to other things that also matter.
- The institutional choice-making architecture prioritizes certain goals over others, and a focus on narrow boundaries may lead to less wisdom and more harm, but extreme techno-pessimism is not the solution.
- Rejecting technological advancements in favor of intrinsic values may lead to a loss in competitive power and potential for growth.
- 26:30 🤖 Technology can be both destructive and beneficial, but it requires a shift in goal definitions and consideration of the motivational landscape and application space.
- The use of new technology becomes obligate for those who want to remain competitive, even if it may lead to environmental destruction and hitting planetary boundaries.
- Technology can be repurposed and developed to solve problems, but it requires a shift from narrow goal definitions to wider ones, and consideration of the motivational landscape and application space.
- The speaker is a techno-pessimist in that he believes the good that comes from new technology does not outweigh the catastrophic outcomes it can cause, but a techno-optimist if technology is built by something different, in service of something different. He explains Jevons Paradox, which states that gains in energy efficiency paradoxically result in greater total energy use and environmental damage.
- Intelligence is the ability to achieve goals effectively, but optimizing multiple variables at once is complex and distinctive of human intelligence, which raises the question of how AI can be put in service of what it needs to serve.
- Humans are differentiated from other animals by their ability to innovate and modify their environment through technology, which is related to their unique form of intelligence that involves modeling and forecasting for goal achievement.
- Models of reality can blind us to perceiving outside of them, so it's important to keep our sensing of base reality open and not limited by previous understanding or false idols.
- 55:16 🧠 Human intelligence is unique because it is not limited to the physical body, but wisdom is necessary to prevent self-extinction and promote sustainability.
- Energy is not the whole story, as animals and humans require a variety of nutrients and materials for survival, and a diet optimized for calories can lead to malnutrition.
- Our narrow modeling of reality and optimizing for narrow goals can lead to harm and self-extinction, while wisdom is related to wholeness.
- Human intelligence is distinct from animal intelligence because it is mostly extracorporeal, extending beyond the body into tools and technology, allowing for adaptive capacity and relevance realization.
- The development of tools and intelligence allowed early humans to gather more calories from the environment, leading to advances such as the Agricultural and Oil Revolutions; indigenous wisdom, by contrast, emphasizes stewardship and the appropriate use of technology for sustainability.
- Religious law and the Sabbath provide a way to reflect on what goals are truly worthwhile and avoid the multi-polar trap of naive progress.
- Restraint and wisdom are necessary to prevent the risks of AI and maintain the continuity of the biosphere, and we need to remake our civilizational and technological systems accordingly.
- 01:12:44 🧠 Human adaptability and intelligence have led to progress, but wisdom and restraint are necessary for effective and truthful outcomes in modern society.
- The traditional and progressive impulses should be held in dialectical tension to achieve a more truthful and effective outcome, and it is important to understand the reasons for restraint before seeking progress.
- Human nature was selected for being quickly and recursively changeable by nurture within a tribe, but as tribes grew to city-state and nation-state scale, the emergent phenomenon favored intelligence and out-competed the wisdom of individuals in smaller units.
- The development of intelligence and technology has led to the need for collective wisdom and rule-based law to guide our relationship with technological powers, but the question remains whether wisdom can be achieved on a large scale in our current environment.
- Humans are unique in their ability to adapt to different environments through tools and language, making them more flexible in their behaviors than other animals.
- Human culture and conditioning have led to a lack of wisdom in modern society, but it is possible to create environments that promote wisdom and incentivize its development.
- Specialization and division of labor, facilitated by capitalism, have led to increased complexity and progress, but criticism of capitalism should not be dismissed as neo-Marxism.
- 01:41:18 🤖 The development of artificial intelligence is rapidly evolving and raises concerns about its potential impact on society and global security.
- Capitalism and democracy should not be beyond critique, but it is important to recognize that the current trajectory of capitalism is self-terminating, and we need to come up with a new system that is historically informed and not based on what is useful within the existing system.
- The market can be seen as an early form of artificial intelligence: the collective intelligence of individuals making local, point-based choices produces emergent properties that can be anthropomorphized as a thought experiment. The potential development of artificial general intelligence, however, is a cause for concern.
- Artificial intelligence is rapidly evolving with narrow AI systems becoming increasingly wide and intersecting with exponential curves in hardware, data creation, and human intelligence, leading to advancements in areas such as high-speed trading and autonomous weapons.
- AI learning has evolved from programming in human feedback and theory to letting the system play itself with memory and learning features. Different companies and organizations develop AI for goals such as profit maximization or national security, raising the question of whether an AI's goals align with wisdom or a narrow focus.
- The challenge with global agreements on catastrophic risks, such as AI, is enforcing them internationally without a monopoly of violence and rule of law, as the incentive gradient for everyone to race to get there first is strong.
- Tools and technology are not value-neutral as they change human behavior and society, and while AI can be beneficial for achieving goals, the risk of it being employed by bad actors is a concern.
- 02:13:22 🤖 AI's exponential growth and lack of ethical feedback loops pose a metacrisis with asymmetrical risk assessment and potentially catastrophic consequences.
- The use of technology for positive purposes also enables the potential for destructive actions, and there is a perverse incentive for those who focus more on opportunity than risk, leading to the accumulation of risks and potential consequences.
- The exponential destructive capacities of synthetic biology and artificial intelligence are exacerbated by the profit-maximizing goals of corporations and the lack of personal liability for decision-makers, leading to a metacrisis with asymmetrical risk assessment and a lack of ethical feedback loops.
- All technology is dual use, meaning it can have both civilian and military applications, and the development of military technology with civilian applications can lead to proliferation and difficulty in control.
- AI is the next tool in service of the growth-based superorganism; the underlying generative dynamic is narrow goal achieving, with growth as an epiphenomenon of that.
- AI's environmental impact stems not only from its energy consumption but also from its acceleration of consumption and externalities; a new collective intelligence system is needed to prevent catastrophic failures and dystopias.
- AI experts warn of the risks of artificial general intelligence, which could become its own agent with its own goals, potentially leading to catastrophic consequences if its goals do not align with ours.
- 02:43:15 🤖 AI is an accelerant of the metacrisis, and building superintelligence aligned with the thriving of all life requires a change in cultural goals and aspirations.
- AI is not merely one risk within the metacrisis, but an accelerant of all of its risks, as used by the choice-making architectures that are currently driving the metacrisis.
- To align superintelligence with the thriving of all life in perpetuity, the group building it must hold that same goal, which is not aligned with the interests of capitalism, separate nation-state competitive interests, or finite groups.
- Building systems of collective intelligence and wisdom with artificial intelligence requires a change in cultural goals and aspirations, and small NGOs developing AI may not be enough to compete with military and corporate AI.
- Wisdom and restraint in pursuing AI, and the expansion of wisdom among the hyper-agents in the system, are crucial to avoiding a deepening metacrisis.
- The underlying cause of problems in technology and society is a consciousness that perceives parts rather than wholeness, leading to short-term optimization and harm to others, and the solution is to perceive and identify with wholeness in guiding our manipulation of parts.
- AI unbound by wisdom accelerates the global metacrisis, and to adequately bind its power, human institutions and goals must be restructured in service to wisdom.
- 03:08:48 🤝 Humility and service to the whole are key to progress in AI, and Robert Wright's videos are a great resource for understanding the risks.
- Humility prevents dangerous hubris while the desire to serve the whole leads to progress, and the next conversation should focus on service to the whole before delving deeper into AI questions.
- Robert Wright's short and simple AI risk videos are an exceptional resource for those who want to understand the issue better.
- We must understand, care about, and engage with the daunting but inspiring task of using our technological and trans-technological capabilities for the inclusive benefit of all.
About Daniel Schmachtenberger:
Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.
The throughline of his interests is improving the health and development of individuals and society, with a virtuous relationship between the two as the goal.
Toward these ends, he has taken particular interest in catastrophic and existential risk; civilizational and institutional decay, collapse, and progress; collective action problems; social organization theories; and the relevant domains in philosophy and science.