AI RSS
Leak shows human-level AI achieved. Here's the evidence. + Tesla bot, DALL·E 3, Pi.
AI's stunning new skills. To learn AI, visit: https://brilliant.org/digitalengine where you'll also find loads of fun courses on maths, science and computer science.
AI robots, with Max Tegmark, Dario Amodei, Emad Mostaque, Tesla bot, Ameca, Digit, Pi AI, GPT-4.
Thanks to Brilliant for sponsoring this video.
Theory of mind may have spontaneously emerged in large language models.
https://www.gsb.stanford.edu/faculty-research/working-papers/theory-mind-may-have-spontaneously-emerged-large-language-models
Letter signed by 1500 professors (and thousands of other experts)
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Statement on the risk from the leaders of AI firms:
https://www.safe.ai/statement-on-ai-risk
1.5m people take Turing test.
https://arxiv.org/abs/2305.20010
RT-2: New model translates vision and language into action
https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action?utm_source=twitter&utm_medium=social&utm_campaign=rt2
Embodied AI: Bridging the Gap to Human-Like Cognition
https://www.humanbrainproject.eu/en/follow-hbp/news/2023/08/09/embodied-ai-bridging-gap-human-cognition/#:~:text=Our%20brain%20has%20evolved%20through,connection%20to%20the%20physical%20world.
AI and robots help understand animal language
https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/
Karen Bakker: Could an orca give a TED Talk?
https://www.ted.com/talks/karen_bakker_could_an_orca_give_a_ted_talk?utm_source=rn-app-share&utm_medium=social&utm_campaign=tedspread
Synthesizing Physical Character-Scene Interactions (learning from simulations)
https://dl.acm.org/doi/abs/10.1145/3588432.3591525
Smarter people tend to have more advanced moral reasoning skills.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5167721/#:~:text=For%20example%2C%20Derryberry%20et%20al,predictor%20for%20the%20moral%20scores.
Why are smarter people more pro-social?
https://www.sciencedirect.com/science/article/abs/pii/S0160289618301466
IQ and society
https://blogs.scientificamerican.com/beautiful-minds/iq-and-society/#:~:text=IQ%20correlates%20positively%20with%20family,habits%2C%20illness%2C%20and%20morality.
DAWN OF LMMs 🔥 Microsoft puts GPT Vision to test... Final AI Agents Puzzle Piece?
Get on my daily AI newsletter 🔥
https://natural20.beehiiv.com/subscribe
[News, Research and Tutorials on AI]
See more at:
https://natural20.com/
The Paper:
https://arxiv.org/abs/2309.17421
My AI Playlist:
https://www.youtube.com/playlist?list=PLb1th0f6y4XROkUAwkYhcHb7OY9yoGGZH
[TIMELINE]
[00:00] Intro
[02:22] Abstract
[03:53] Accounting
[04:44] Attention to Detail
[06:23] Image Recognition Across Domains
[08:53] Medical Reasoning
[11:23] Making Coffee + Embodied Agents
[12:54] Industry, Manufacturing and QA
[17:11] Graphical User Interface Navigation
[26:24] Understanding Video, Emotions and Aesthetics
[29:10] Analyzing Dash Cam Footage
[30:48] Improving AI Image Prompts
[32:42] Visual Pointing
[37:51] Charts, Languages, Memes and Clues
[51:23] Final Points
GPT Masterclass: 4 Years of Prompt Engineering in 16 Minutes
Medium article: https://medium.com/@dave-shap/become-a-gpt-prompt-maestro-943986a93b81
Slide Deck: https://github.com/daveshap/YouTube_Slide_Decks/blob/main/Business%20and%20Product/LLM%20Prompt%20Taxonomy.pdf
Large language models (LLMs) like GPT-4 have shown impressive abilities to generate humanlike text, hold conversations, and demonstrate knowledge across many domains. However, there is still confusion about exactly how LLMs work and what capabilities they currently possess. This overview provides a high-level taxonomy of LLM abilities and limitations.
LLMs are deep learning neural networks trained on massive text datasets to predict the next word in a sequence. This objective lets them build complex statistical representations of language and accumulate world knowledge from their training data. LLMs have no explicit rules or knowledge base; their capabilities emerge from recognizing patterns.
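The next-word objective can be demonstrated directly. The following minimal sketch (an illustration, not from the article) uses the open GPT-2 model through Hugging Face's transformers library; the prompt is an arbitrary example.

```python
# Minimal next-token prediction with GPT-2: the model assigns a score to
# every vocabulary item as the continuation of the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab_size)

# The distribution over the next word sits at the last sequence position.
next_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_id))  # typically prints " Paris"
```

Everything an LLM does downstream, from summarization to planning, is built on repeated applications of this single prediction step.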
LLMs excel at reductive operations like summarization, distillation, and extraction, which condense large inputs by identifying salient information. Summarization produces concise overviews of documents. Distillation extracts key facts and principles. Extraction retrieves targeted information like names, dates, or figures.
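As a concrete illustration (assumed, not taken from the article), the three reductive operations differ only in the instruction wrapped around the input. The sketch below uses the OpenAI Python SDK; the model name, document placeholder, and prompt wordings are arbitrary.

```python
# Reductive operations expressed as prompt instructions around one document.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
document = "..."   # placeholder for the large input text

prompts = {
    "summarization": "Summarize the following document in three sentences:",
    "distillation": "List the key facts and principles in the following document:",
    "extraction": "Extract all names, dates, and figures from the following document:",
}

for task, instruction in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{instruction}\n\n{document}"}],
    )
    print(task.upper(), response.choices[0].message.content, sep="\n")
```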
Transformational techniques like paraphrasing, translation, and restructuring reshape text without losing meaning. Paraphrasing rewrites text with different words/phrasing while preserving meaning. Translation converts between languages. Restructuring improves logical flow and readability. Transformations leverage LLMs' understanding of linguistic conventions and narrative flow.
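The transformational operations follow the same pattern; only the instruction changes. These templates are illustrative wordings of my own, not prescribed prompts:

```python
# Hypothetical prompt templates for the three transformations; any chat
# model interface can run them in place of the reductive prompts above.
paraphrase = "Rewrite the following text in different words without changing its meaning:\n{text}"
translate = "Translate the following text into French, preserving tone and register:\n{text}"
restructure = "Reorder and reconnect the following paragraphs so the argument flows logically:\n{text}"
```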
Generative tasks like drafting, planning, brainstorming, and amplifying synthesize new content from limited input. Drafting can expand prompts into coherent documents. Planning formulates step-by-step strategies to achieve goals based on parameters. Brainstorming produces creative possibilities from prompts. Amplification adds explanatory details to existing text. Generative abilities are more variable but rapidly improving.
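Generative tasks are commonly run with a higher sampling temperature to encourage diversity; the call below is a sketch under that assumption, using the same SDK as the earlier example (model choice and prompt are illustrative).

```python
# Brainstorming sketch: raising temperature trades determinism for variety.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    temperature=1.0,  # higher values sample more diverse continuations
    messages=[{
        "role": "user",
        "content": "Brainstorm ten distinct titles for an article on LLM capabilities.",
    }],
)
print(response.choices[0].message.content)
```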
Examined through Bloom's Taxonomy, LLMs exhibit skills ranging from basic remembering of facts to the highest level, creating original content. Their statistical learning acts as a knowledge repository that can be queried. LLMs also demonstrate strong abilities in understanding concepts, applying knowledge, analyzing passages, and evaluating content. With the right prompting, they can create novel stories, articles, and dialogue.
LLMs hold vast latent knowledge that is never explicitly stated in their training data, including memorized facts, general world knowledge, and learned cognitive skills for tasks like translation. This latent knowledge forms a dense reservoir that requires careful probing with prompts and techniques to extract. While promising, reliance on latent knowledge highlights LLMs' need to better index and activate their own internal knowledge.
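One common probing technique is to frame the query so it addresses the relevant region of latent knowledge. The contrast below is a made-up illustration of role prompting, not a method from the article:

```python
# A bare question versus a role-framed probe of the same latent knowledge.
bare_prompt = "Why do auroras have different colors?"
role_prompt = (
    "You are an atmospheric physicist. Explain, mechanism by mechanism, "
    "which excited atmospheric species produce each aurora color and at "
    "what altitudes each dominates."
)
# The second prompt tends to surface deeper, better-organized detail from
# the same model, because it activates a more specific knowledge context.
```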
Emergent capabilities like theory of mind, implied cognition, logical reasoning, and in-context learning have arisen from recognizing intricate patterns, not hardcoded rules. Theory of mind suggests models can distinguish their own and others' perspectives. Implied cognition points to dynamic reasoning when generating text. Logical reasoning abilities hint at inferring abstract principles from data. Rapid in-context learning demonstrates knowledge acquisition abilities.
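In-context learning is the easiest of these to demonstrate: a few worked examples in the prompt induce the task with no weight updates. The reviews and labels below are invented for illustration:

```python
# Few-shot prompt: the model infers the labeling pattern from examples alone.
few_shot = """Review: The plot dragged and the acting was flat.
Sentiment: negative

Review: A stunning, heartfelt film from start to finish.
Sentiment: positive

Review: Gorgeous visuals, but the story never comes together.
Sentiment:"""
# Sent to any chat or completion model, the expected continuation is a
# one-word label that follows the established pattern.
```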
Rather than a bug, LLMs' ability to fabricate plausible statements represents a core feature of intelligence. Humans also exhibit a spectrum from creativity to hallucination based on uncontrolled pattern generation. The ideal is not suppressing but responsibly directing generation. Research into alignment and ethics can allow beneficial creativity to flourish while minimizing harms. Maintaining factual grounding and conveying uncertainty are key precautions.
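One precaution mentioned above, factual grounding with explicit uncertainty, is often implemented as a constraint in the prompt itself. The wording below is an assumed sketch, not a tested guardrail:

```python
# Grounding template: the model is told to admit uncertainty rather than
# fabricate when the supplied context lacks the answer.
grounded_prompt = (
    "Answer using only the context below. If the context does not contain "
    "the answer, reply exactly: 'I don't know based on the given context.'\n\n"
    "Context: {context}\n\nQuestion: {question}"
)
```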
In summary, LLMs have diverse capabilities and limitations requiring continued research. With responsible development focused on augmenting human intelligence, LLMs offer exciting potential while managing risks. Their latent knowledge and emergent properties highlight promising directions to elevate reasoning, creativity, and understanding.
Addressing AI Risks: Global Governance & Ethical Impacts
Daniel Schmachtenberger, a social philosopher and founding member of The Consilience Project, talks about the metacrisis, nukes, AI and consciousness.
The Impact of ChatGPT talks (2023) - Prof. Max Tegmark (MIT)
Keeping AI under control through mechanistic interpretability
The Impact of ChatGPT and other large language models on physics research and education (2023)
Event organizers: Kevin Burdge, Joshua Borrow, Mark Vogelsberger
Session 1: The computer science underlying large language models
Tags
- All
- Agentic AI
- AGI
- AI
- AI Art
- AI Ethics
- AI Girlfriends
- AI Models
- AI Risk
- ai tools
- Alan D. Thompson
- Alexandr Wang
- Andrew Huberman
- Andrew Ng
- Artificial Cognition
- Aurora Supercomputer
- Authenticity
- Autism Spectrum
- AutoGPT
- Aza Raskin
- Azure OpenAI
- Azure OpenAI Service
- Bias Compensation
- Bias Therapy
- Brian Roemmele
- Chain-of-Thought Prompting
- ChatGPT
- Christopher Rufo
- climate change
- Cognition Enhancement
- Cognitive Bias
- Cognitive Content
- Cognitive Performance
- Collective Intelligence
- Collective Stupidity
- Communication
- Consciousness
- Cosmology
- Critical Race Theory
- Daniel Dennett
- Daniel Schmachtenberger
- David Shapiro
- Deep Thought
- Dennis Prager
- Digital Minds
- Digital Thoughts
- Diversity
- Dojo
- Douglas Murray
- Elon Musk
- Emad Mostaque
- Equity
- Eric Weinstein
- Ethical Community Development
- Ethics
- Everyman
- Exponential Enterprise
- Fei-Fei Li
- Foresight
- Fred Lerdahl
- Frontiers Forum
- Futurecrafting
- Futurework
- Gary Marcus
- Gemini
- Gender
- Gender Pronouns
- Generative AI
- Generative Theory of Tonal Music (GTTM)
- Geoffrey Hinton
- Geoffrey Miller
- Glenn Loury
- Governance
- GPGICs
- GPT-4
- GPT-5
- Higher Education
- Human Potential
- Humanities
- Identity
- Ilya Sutskever
- Implicit Association Tests
- Intel
- Intelligence
- James Lindsay
- Joe Rogan
- Jordan B Peterson
- Jungian Archetypes
- Konstantin Kisin
- Language
- Lex Fridman
- Libra
- Life Coaching
- Liv Boeree
- Male Loneliness
- Marcus Aurelius
- Marcus T. Anthony
- Matt Walsh
- Matthew Berman
- Max Tegmark
- MemoryGPT
- Mental Health
- metabotropic receptors (MRs)
- Metacrisis
- Michio Kaku
- Microsoft AI
- Microsoft Copilot
- Microsoft Jarvis
- Microsoft OpenAI
- Microsoft Semantic Kernel
- Millennials
- Mind Reading
- Minecraft
- Mirella Lapata
- MIT
- MLLM
- Moha Bensofia
- Morality
- Multimodal Large Language Model
- Multiversal Stories
- Music
- Narcissism
- Neurodivergence
- Neuroplasticity
- Neuroscience
- Nvidia
- OpenAI
- optical computers
- Personal Development
- Peter Bannon
- Peter H. Diamandis
- Philosophy
- pinecone
- Psychology
- Ramani Durvasula
- Ray Jackendoff
- Ray Kurzweil
- Reflection
- Reid Hoffman
- Relationships
- Religion
- Richard Haier
- Robotic Process Automation (RPA)
- robotics
- Sabine Hossenfelder
- Sam Altman
- Sam Harris
- Sebastien Bubeck
- semantic search
- Seneca
- Simulation
- Singularity Ready
- Stephen Fry
- String theory
- Stupidity
- Super Alignment
- Superintelligence
- Susan Blackmore
- Synthetic Intelligence
- Synthetic Mind
- Technology
- Terence McKenna
- Tesla
- Tesla AI
- The Hero Archetype
- Theism
- Theory of Mind
- Thomas Sowell
- Thought
- Thought Experiments
- Transactivism
- transcendence
- Translation
- Tree of Thoughts
- Tristan Harris
- Turing Lectures
- Unconscious Bias Training
- Victor Davis Hanson
- Wes Roth
- Will Caster
- Woke Ideologies
- Worker Productivity
- Worker Satisfaction
- Yann LeCun
- Yuval Noah Harari