“What's wrong with LLMs and what we should be building instead” - Tom Dietterich - #VSCF2023
https://i.ytimg.com/vi/cEyHsMzbZBs/maxresdefault.jpg
Thomas G. Dietterich is emeritus professor of computer science at Oregon State University. He is one of the pioneers of the field of machine learning.
He served as executive editor of the journal Machine Learning (1992–98) and helped co-found the Journal of Machine Learning Research.
He is a member of our select valgrAI Scientific Council.
Keynote: “What's wrong with LLMs and what we should be building instead”
Abstract: Large Language Models provide a pre-trained foundation for training many interesting AI systems. However, they have many shortcomings. They are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. This talk will review these shortcomings and current efforts to address them within the existing LLM framework. It will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. We believe this alternative can address all of the shortcomings of LLMs. We will speculate about how this modular architecture could be built through a combination of machine learning and engineering.
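As a purely illustrative aside (not part of the talk itself): the abstract's core idea is to separate linguistic skill from an external, easily updated store of world knowledge with explicit attribution. The minimal Python sketch below shows one hypothetical way such a decomposition could look; the KNOWLEDGE_BASE entries and the retrieve, language_module, and answer functions are invented stand-ins, not Dietterich's actual proposal.

```python
# Illustrative sketch only: a toy decomposition that keeps "world knowledge" in a
# small, updateable store outside the language component, so answers are cheap to
# update and carry explicit source attribution.
from dataclasses import dataclass

# Hypothetical knowledge base: facts live here, not inside model weights.
# Values are (who, detail) pairs taken from the speaker bio above.
KNOWLEDGE_BASE = {
    "machine_learning_journal_executive_editor": ("Thomas G. Dietterich", "1992-1998"),
    "jmlr_co_founder": ("Thomas G. Dietterich", "co-founder"),
}

@dataclass
class Answer:
    text: str
    sources: list  # explicit attribution, one of the shortcomings the talk lists

def retrieve(query: str) -> list:
    """Stand-in retrieval component: return entries whose key shares a word with the query."""
    words = query.lower().replace("?", "").split()
    return [(key, val) for key, val in KNOWLEDGE_BASE.items() if any(w in key for w in words)]

def language_module(facts: list) -> str:
    """Stand-in 'linguistic skill' component: verbalize retrieved facts.
    A real system might use an LLM here purely for fluent phrasing."""
    if not facts:
        return "No sourced answer available."  # abstain rather than confabulate
    key, (who, detail) = facts[0]
    return f"{who} ({detail}), according to entry '{key}'."

def answer(query: str) -> Answer:
    facts = retrieve(query)
    return Answer(text=language_module(facts), sources=[key for key, _ in facts])

if __name__ == "__main__":
    print(answer("Who was the executive editor of the Machine Learning journal?"))
```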
Timeline:
00:00-02:00 Introduction to large language models and their capabilities
02:01-03:14 Problems with large language models: Incorrect and contradictory answers
03:15-04:28 Problems with large language models: Dangerous and socially unacceptable answers
04:29-06:40 Problems with large language models: Expensive to train and lack of updateability
06:41-12:58 Problems with large language models: Lack of attribution and poor non-linguistic knowledge
12:59-15:02 Benefits and limitations of retrieval augmentation
15:03-15:59 Challenges of attribution and data poisoning
16:00-18:00 Strategies to improve consistency in model answers
18:01-21:00 Reducing dangerous and socially inappropriate outputs
21:01-25:26 Learning and applying non-linguistic knowledge
25:27-37:35 Building modular systems to integrate reasoning and planning
37:36-39:20 Surprising capabilities of large language models despite their lack of a knowledge base
39:21-40:47 The importance of modular systems that separate linguistic skill from world knowledge
40:48-45:47 Questions and discussion on cognitive architectures and addressing miscalibration
45:48 Overcoming flaws in large language models through prompt engineering and verification
Follow us!
LinkedIn: https://www.linkedin.com/company/valgrai/
Instagram: https://www.instagram.com/valgrai/
Youtube: https://www.youtube.com/@valgrai/
Twitter: https://twitter.com/fvalgrai
-------------------------------------
Duration: 0:49:47
Published: 2023-10-26T13:52:57Z
Video ID: cEyHsMzbZBs
Tags:
- Agentic AI
- AGI
- AI
- AI Art
- AI Ethics
- AI Girlfriends
- AI Models
- AI Risk
- ai tools
- Alan D. Thompson
- Alexandr Wang
- Andrew Huberman
- Andrew Ng
- Artificial Cognition
- Aurora Supercomputer
- Authenticity
- Autism Spectrum
- AutoGPT
- Aza Raskin
- Azure Open AI
- Azure OpenAI Service
- Bias Compensation
- Bias Therapy
- Brian Roemmele
- Chain-of-Thought Prompting
- ChatGPT
- Christopher Rufo
- climate change
- Cognition Enhancement
- Cognitive Bias
- Cognitive Content
- Cognitive Performance
- Collective Intelligence
- Collective Stupidity
- Communication
- Consciousness
- Cosmology
- Critical Race Theory
- Daniel Dennett
- Daniel Schmachtenberger
- David Shapiro
- Deep Thought
- Dennis Prager
- Digital Minds
- Digital Thoughts
- Diversity
- Dojo
- Douglas Murray
- Elon Musk
- Emad Mostaque
- Equity
- Eric Weinstein
- Ethical Community Development
- Ethics
- Everyman
- Exponential Enterprise
- Fei-Fei Li
- Foresight
- Fred Lerdahl
- Frontiers Forum
- Futurecrafting
- Futurework
- Gary Marcus
- Gemini
- Gender
- Gender Pronouns
- Generative AI
- Generative Theory of Tonal Music (GTTM)
- Geoffrey Hinton
- Geoffrey Miller
- Glenn Loury
- Governance
- GPGICs
- GPT-4
- GPT-5
- Higher Education
- Human Potential
- Humanities
- Identity
- Ilya Sutskever
- Implicit Association Tests
- Intel
- Intelligence
- James Lindsay
- Joe Rogan
- Jordan B Peterson
- Jungian Archetypes
- Konstantin Kisin
- Language
- Lex Fridman
- Libra
- Life Coaching
- Liv Boeree
- Male Loneliness
- Marcus Aurelius
- Marcus T. Anthony
- Matt Walsh
- Matthew Berman
- Max Tegmark
- MemoryGPT
- Mental Health
- metabotropic receptors (MRs)
- Metacrisis
- Michio Kaku
- Microsoft AI
- Microsoft Copilot
- Microsoft Jarvis
- Microsoft Open AI
- Microsoft Semantic Kernel
- Millennials
- Mind Reading
- Minecraft
- Mirella Lapata
- MIT
- MLLM
- Moha Bensofia
- Morality
- Multimodal Large Language Model
- Multiversal Stories
- Music
- Narcissism
- Neurodivergence
- Neuroplasticity
- Neuroscience
- Nvidia
- OpenAI
- optical computers
- Personal Development
- Peter Bannon
- Peter H. Diamandis
- Philosophy
- pinecone
- Psychology
- Ramani Durvasula
- Ray Jackendoff
- Ray Kurzweil
- Reflection
- Reid Hoffman
- Relationships
- Religion
- Richard Haier
- Robotic Process Automation (RPA)
- robotics
- Sabine Hossenfelder
- Sam Altman
- Sam Harris
- Sebastien Bubeck
- semantic search
- Seneca
- Simulation
- Singularity Ready
- Stephen Fry
- String theory
- Stupidity
- Super Alignment
- Superintelligence
- Susan Blackmore
- Synthetic Intelligence
- Synthetic Mind
- Technology
- Terence McKenna
- Tesla
- Tesla AI
- The Hero Archetype
- Theism
- Theory of Mind
- Thomas Sowell
- Thought
- Thought Experiments
- Transactivism
- transcendence
- Translation
- Tree of Thoughts
- Tristan Harris
- Turing Lectures
- Unconscious Bias Training
- Victor Davis Hanson
- Wes Roth
- Will Caster
- Woke Ideologies
- Worker Productivity
- Worker Satisfaction
- Yann LeCun
- Yuval Noah Harari