How Neural Networks Learned to Talk | ChatGPT: A 30 Year History


This video explores the journey of language models, from their modest beginnings through the development of OpenAI's GPT models, and hints at Q*. It traces the key moments in neural network research on next-word prediction, beginning with the early experiments on tiny language models in the 1980s and highlighting contributions by researchers such as Jordan, who introduced recurrent neural networks, and Elman, whose work on learning word boundaries reshaped our understanding of language processing.

Featuring Noam Chomsky, Douglas Hofstadter, Michael I. Jordan, Jeffrey Elman, Geoffrey Hinton, Ilya Sutskever, Andrej Karpathy, Yann LeCun, and more.

This is the last video in the series "The Pattern Machine"; you can watch the full series here: https://www.youtube.com/playlist?list=PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ

Chapters:
00:00 - Introduction
00:32 - Hofstadter's thoughts on ChatGPT
01:00 - Recap of supervised learning
01:55 - First paper on sequential learning
02:55 - First use of state units (RNN)
04:33 - First observation of word boundary detection
05:30 - First observation of word clustering
07:16 - First "large" language model (Hinton/Sutskever)
10:10 - Sentiment neuron (Ilya & Hinton)
12:30 - Transformer explanation
15:50 - GPT-1
17:00 - GPT-2
17:55 - GPT-3
18:20 - In-context learning
19:40 - ChatGPT
21:10 - Tool use
23:25 - Philosophical question: what is thought?

#AI #SyntheticMinds
Duration: 0:26:55 | Published: 2023-11-29T15:33:41Z | Video ID: OFS90-FX6pg
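As a companion to the chapters on state units and next-word prediction, here is a minimal sketch, not taken from the video, of an Elman-style recurrent network scoring possible next words over a toy vocabulary. The vocabulary, layer sizes, and randomly initialised (untrained) weights are illustrative assumptions; a real model would learn the weights from data by backpropagation.

```python
# Minimal sketch of next-word prediction with an Elman-style RNN (NumPy only).
# All names, sizes, and weights are illustrative assumptions, not the video's code.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat", "."]
word_to_id = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 16                     # vocabulary size, hidden "state unit" size

# Randomly initialised weights; training would adjust these.
W_xh = rng.normal(0, 0.1, (H, V))         # input  -> hidden
W_hh = rng.normal(0, 0.1, (H, H))         # hidden -> hidden (recurrent state)
W_hy = rng.normal(0, 0.1, (V, H))         # hidden -> output logits

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_next(words):
    """Run a word sequence through the RNN and return P(next word) over the vocabulary."""
    h = np.zeros(H)                        # Elman-style context/state units
    for w in words:
        x = np.zeros(V)
        x[word_to_id[w]] = 1.0             # one-hot encode the current word
        h = np.tanh(W_xh @ x + W_hh @ h)   # new state depends on input and previous state
    return softmax(W_hy @ h)

probs = predict_next(["the", "cat", "sat", "on", "the"])
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```

With untrained weights the output distribution is near uniform; the point of the sketch is only to show the shape of the computation the video describes, where the recurrent state carries context forward so the network can predict the next word.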
