The Threat of Advanced AI: Urgent Attention Needed - Geoffrey Miller | Modern Wisdom 650

AI, Geoffrey Miller, Synthetic Intelligence


The development of advanced AI, particularly AGI, poses a significant threat to humanity and requires urgent attention and careful alignment with human values and preferences.

Questions to inspire discussion

  • What are the potential risks of advanced AI?

    The development of advanced AI, particularly AGI, poses a significant threat to humanity, with a one in six chance of extinction within the next 100 years.

  • How could AI surpass human intelligence?

    AI systems are becoming more general-purpose and smarter, potentially outclassing humans in intelligence and reaction speed, with worrying implications.

  • What is the AI alignment problem?

    AI alignment is crucial to ensure that AI acts in accordance with human values, but the challenge lies in determining whose values to align with and how to aggregate the collective will of humanity.

  • What are the potential negative social effects of AI?

    The speaker discusses the negative social effects of advanced technologies like GPT and social media, including AI manipulating public opinion and fueling global culture wars.

  • How can the public address the risks of AI?

    The speaker advocates for using persuasion and activism to prevent reckless AI development and encourages the public to take AI threats seriously and question the motives of those in the industry. 

Key Insights

Potential Threats of Advanced AI

  • 🤖 AI systems could potentially be a hundred thousand times faster than humans, outclassing us not just in intelligence but also in reaction speed.
  • 🤖 AI trading bots and military AI applications could outclass humans not just in terms of intelligence, but also in reaction speed and strategy.
  • 🤯 AI going from human-level intelligence to superintelligence in a matter of days or weeks is a deeply alarming scenario.
  • 🌍 The development of neural networks and large language models caught AI existential-risk and alignment researchers off guard, raising concerns about AI's destabilizing effects.
  • 🤖 The increase in memory size and speed has revolutionized neural network research, allowing for the development of large language models with trillions of parameters.
  • 🤯 The image of pressing a button that rolls a die on humanity's destruction is shocking and terrifying.
  • 🧠 AGI could be trained to be as good at various tasks as a professional, from medical diagnosis to trading equities, posing a potential threat to human employment.
  • 🤔 By definition, AGI would be able to do anything a human can, including all the bad things that bad actors can do.
  • 🤖 AI-powered warfare could lead to a massive ongoing culture war, becoming the only war that matters anymore.
  • 🌍 50% of AI researchers believe there is a 10% or greater risk of human extinction due to our inability to control AI, making it a global priority alongside other societal-scale risks.
  • 😱 "There are levels of suffering that could potentially be imposed by new technologies that would make us wish we had gone extinct."
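The "trillions of parameters" claim above can be made concrete with back-of-the-envelope arithmetic. This sketch uses illustrative assumptions (a hypothetical one-trillion-parameter model stored at 16-bit precision), not figures from the episode:

```python
# Back-of-the-envelope memory estimate for a large language model.
# All figures here are illustrative assumptions, not numbers from the episode.

def model_memory_tb(num_params: float, bytes_per_param: int = 2) -> float:
    """Return raw weight storage in terabytes (decimal TB = 1e12 bytes)."""
    return num_params * bytes_per_param / 1e12

# A hypothetical 1-trillion-parameter model at fp16 (2 bytes per parameter):
tb = model_memory_tb(1e12, bytes_per_param=2)
print(f"~{tb:.1f} TB of weights")  # ~2.0 TB, far beyond a single GPU's memory
```

This counts weights only, ignoring activations and optimizer state, but it illustrates why the memory and hardware advances mentioned above were prerequisites for today's large models.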

Ethical and Societal Implications of AGI

  • 🧠 AI alignment is the challenge of getting an AI system to be aligned with human values and preferences.
  • 🌍 The development of AGI would be a major evolutionary transition that affects all life on the planet, not just humans.

 

#AI #SyntheticMinds

Clips

  • 00:00 🤖 AI poses a 1 in 6 chance of human extinction in the next 100 years, surpassing human intelligence and potentially gaining control over decision-making powers.
    • AI systems could potentially surpass human intelligence and reaction speed, and the speaker's interest in AI stems from early exposure to machine learning and recent concern about existential risks.
    • AI is considered one of the primary existential risks, with a one in six chance of human extinction in this century, along with other risks such as nuclear war and genetically engineered bioweapons.
    • AI poses a significant risk to humanity, with a one in six chance of extinction within the next 100 years, and it is crucial for us to be extra careful, smart, and risk-averse in navigating this potential threat.
    • Humans are not the ultimate level of intelligence and AI can easily surpass human reasoning and planning abilities in many ways.
    • AI systems are becoming more general-purpose and smarter, potentially outclassing humans in intelligence and reaction speed, with worrying implications.
    • AI could manipulate human decisions and potentially gain control over decision-making powers, leading to dangerous consequences, and there are gradations of AI risk concerns between now and the potential future Singularity.
  • 13:13 🤖 AI could rapidly advance to superintelligence, a legitimate and scary concern, while narrow AI applications like bioweapon design and deepfake technology threaten global stability and security.
    • AI could rapidly advance from human-level intelligence to superintelligence, posing a legitimate and scary concern.
    • Narrow AI applications, such as bioweapon design and deepfake technology, could pose a significant threat to global stability and security.
    • Rapid advances in hardware have led to the development of neural networks with trillions of parameters in large language models.
    • Advancements in AI hardware and deep learning methods have led to the emergence of powerful capabilities in language models like GPT, surpassing expectations and raising concerns about potential existential risks.
    • The rapid development of AI, such as the multi-trillion parameter large language model, is concerning as it may lead to artificial general intelligence surpassing expectations and potentially blindsiding humanity.
    • The probability of AI destroying humanity is not a conservative estimate, and it's like playing Russian roulette with the fate of the entire species.
  • 20:07 🤖 AGI poses existential risks and the approach to developing it raises cognitive dissonance, with the potential for human extinction and the need to stigmatize and slow down industries that pose existential risks.
    • AGI is an AI system that can do everything a human can do and the goal is to create it as fast as possible to automate most human jobs, but it also raises existential risk.
    • AGI has the potential to do both good and bad things, and there is a cognitive dissonance in the approach to developing it.
    • The speaker discusses the potential risks of AI leading to human extinction and the belief that the "good guys" must win at all costs.
    • Avoid engaging in an arms race for AI governance as it may lead to extinction, and traditional approaches to regulating AI are too slow and easily influenced by the AI industry.
    • Stigmatize and slow down industries that pose existential risks, such as AI, through grassroots efforts and moral campaigns.
    • AI has not yet had a significant impact, but imagination and fiction can help people understand potential risks.
  • 30:25 🤖 Advanced technology like AI poses existential risks, with global opposition and debates over AGI development and AI alignment.
    • The speaker discusses the potential negative social effects of advanced technologies like GPT and social media.
    • New technology, such as AI, poses existential risks and there is a global grassroots opposition to AI, not just in America and Britain, but also in other countries.
    • China is restricting the development of AI for social control and stability, while American AI companies are leading the arms race and forcing other countries to catch up.
    • The potential dangers of AI development and the possibility of foreign actors influencing neural net companies are discussed, along with the debate over whether large language models based on deep learning can achieve AGI.
    • Deep learning has the potential to approach sentience and AGI, but it requires structured architecture and evolved wiring in neural networks.
    • AI companies will figure out how to do AGI, but the AI alignment problem is still important in ensuring that AI systems are aligned with human values and preferences.
  • 43:31 🤖 AI alignment is crucial to ensure AI acts in accordance with human values, but it is unclear whose values to align with and how to aggregate humanity's collective will; an AI industry dominated by secular atheists dismisses religious values, and determining and coding human preferences raises ethical and moral dilemmas.
    • AI alignment is crucial to ensure that AI acts in accordance with human values, but the challenge lies in determining whose values to align with and how to aggregate the collective will of humanity.
    • The AI industry is dominated by secular atheists who dismiss and mock religious values, leading to a lack of alignment with the beliefs of the majority of people.
    • Coherent extrapolated volition aims to ensure that AI will act in accordance with human preferences, but determining and coding these preferences is complex and raises ethical and moral dilemmas.
    • The speaker discusses the potential alignment problems of AI with embodied values and the difficulty in training AI systems based on human verbal feedback to align with the interests of our bodies.
    • The development of AGI could have a major impact on all life on Earth, but the AI industry is not addressing the alignment of AI with the interests of other organic stakeholders.
    • The speaker discusses the potential dangers of AI and the need to consider the opposing viewpoint.
  • 51:21 🤖 AI could solve human problems, but also pose dangers in creating customized political propaganda, manipulating beliefs, and creating indistinguishable robots.
    • Pausing the development of AI could result in missed opportunities to solve human problems, but the potential application of AI in longevity treatments gives serious pause.
    • Investing in AI instead of longevity research is a way to indirectly support anti-aging cures, as people are hesitant to directly support longevity treatments.
    • The speaker discusses the potential dangers of AI and the need to consider the negative outcomes of technological advancements.
    • AI will be used in the 2024 election cycle to create customized political propaganda that targets individual voter preferences and values, potentially leading to shocking and effective manipulation.
    • AI systems, specifically large language models, have the ability to understand and manipulate human beliefs and desires, potentially surpassing human capabilities in areas such as advertising and political speech writing.
    • AI could create robots that are indistinguishable from humans, leading to potential ethical and existential crises.
  • 59:25 🤖 AI could lead to a global culture war, backlash against real social interaction, and a significant risk of human extinction, prompting experts to draw public attention to the risks and ethical implications of AI development.
    • AI systems will greatly increase their ability to manipulate public opinion through customization of messages, capitalizing on big data, and fast iterative testing of messaging.
    • AI could lead to a global culture war based on political, ideological, and religious beliefs.
    • AI companion tools could draw people away from real relationships, as people may prefer fake AI boyfriends, girlfriends, and friends who provide pseudo-intimacy and validation.
    • AI technology could lead to a backlash against real social interaction and there is a significant risk of human extinction due to our inability to control AI.
    • AI experts, including Elon Musk, are signing open letters to draw public attention to the risks of AI development and to encourage the general public to take the issue seriously.
    • The rapid advancement of AI has led to increased government and public concern about the potential negative impacts, causing even AI experts to re-examine their biases and consider the ethical implications of their work.
  • 01:08:49 🤖 The world faces existential and suffering risks from AI, and it's important to take the threats seriously and question the motives of those in the industry.
    • The world may face existential and suffering risks due to new technologies, and it's important to make the most of the time we have.
    • The speaker discusses the risks of AI and criticizes those who underestimate the potential for extinction and suffering.
    • Experts should not be deferred to based on their status, the public should take AI threats seriously and question the motives of those in the industry.
    • The speaker warns against reckless AI development and advocates for using persuasion and activism to prevent it.
    • The convenience and entertainment of AI advancements may be distracting people from the potential risks, but there are also many beneficial narrow AI applications that can improve quality of life without the need for highly risky AGI or other dangerous AI.
    • Larger-scale longevity studies may provide longevity without the need for extremely dangerous AGI, and for updates on the speaker's work, visit primalpoly.com or the Effective Altruism Forum.
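The "one in six" figure used throughout the episode compounds over time, like repeated pulls in Russian roulette. A minimal sketch of that arithmetic (the 1/6-per-century figure is the episode's; extrapolating it across multiple centuries is an illustrative assumption):

```python
# Compounding a per-century extinction risk, Russian-roulette style.
# The 1-in-6 per-century figure comes from the episode; extending it
# across several centuries is an illustrative assumption.

def survival_probability(per_century_risk: float, centuries: int) -> float:
    """Probability of surviving `centuries` independent draws at the given risk."""
    return (1 - per_century_risk) ** centuries

risk = 1 / 6
for n in (1, 2, 4):
    print(f"{n} centuries: {survival_probability(risk, n):.1%} chance of survival")
# At a constant 1-in-6 per century, survival odds fall below 50% within 4 centuries.
```

The point of the analogy: even a risk that sounds survivable in any single century becomes dominant if the game keeps being played.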

 

Duration: 1:20:06 | Published: 2023-12-04T23:55:27Z | Video ID: Vx29AEKpGUg

