Synthetic Intelligence RSS

AI, Synthetic Intelligence -

Thomas G. Dietterich is emeritus professor of computer science at Oregon State University and one of the pioneers of the field of machine learning. He served as executive editor of the journal Machine Learning (1992–98), helped co-found the Journal of Machine Learning Research, and is a member of the valgrAI Scientific Council.

Keynote: “What's wrong with LLMs and what we should be building instead”

Abstract: Large Language Models provide a pre-trained foundation for training many interesting AI systems. However, they have many shortcomings: they are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. This talk will review these shortcomings and current efforts to address them within the existing LLM framework. It will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. We believe this alternative can address all of the shortcomings of LLMs. We will speculate about how this modular architecture could be built through a combination of machine learning and engineering.

Timeline:
00:00-02:00 Introduction to large language models and their capabilities
02:01-03:14 Problems with large language models: incorrect and contradictory answers
03:15-04:28 Problems with large language models: dangerous and socially unacceptable answers
04:29-06:40 Problems with large language models: expensive to train and lack of updateability
06:41-12:58 Problems with large language models: lack of attribution and poor non-linguistic knowledge
12:59-15:02 Benefits and limitations of retrieval augmentation
15:03-15:59 Challenges of attribution and data poisoning
16:00-18:00 Strategies to improve consistency in model answers
18:01-21:00 Reducing dangerous and socially inappropriate outputs
21:01-25:26 Learning and applying non-linguistic knowledge
25:27-37:35 Building modular systems to integrate reasoning and planning
37:36-39:20 Large language models have surprising capabilities but lack knowledge bases
39:21-40:47 Building modular systems that separate linguistic skill from world knowledge is important
40:48-45:47 Questions and discussions on cognitive architectures and addressing the issue of miscalibration
45:48 Overcoming flaws in large language models through prompt engineering and verification

Follow us!
LinkedIn: https://www.linkedin.com/company/valgrai/
Instagram: https://www.instagram.com/valgrai/
Youtube: https://www.youtube.com/@valgrai/
Twitter: https://twitter.com/fvalgrai
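One theme in the timeline is retrieval augmentation: keeping world knowledge in an external, updateable store instead of baking it into the model's weights. The sketch below is a minimal illustration of that idea, not the architecture proposed in the talk; the tiny in-memory corpus and the word-overlap scoring are placeholder assumptions standing in for a real document store and a real dense retriever.

```python
# Minimal retrieval-augmentation sketch (illustrative only, not the speaker's system).
# An in-memory corpus stands in for an external, updateable knowledge store, and
# word-overlap scoring stands in for a real retriever.
from collections import Counter

CORPUS = [
    "The journal Machine Learning was first published in 1986.",
    "Retrieval augmentation supplies an LLM with documents fetched at query time.",
    "Separating world knowledge from linguistic skill makes updates cheaper.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between the query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents with the highest overlap score."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model can ground and attribute its answer."""
    passages = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the passages below and cite them.\n"
        f"Passages:\n{passages}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How does retrieval augmentation help keep an LLM up to date?"))
```

Updating the system then means editing the corpus rather than retraining the model, which is what makes the talk's attribution and updateability concerns tractable in such designs.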

Read more

AI, Synthetic Intelligence -

In this AutoGen tutorial for beginners, you'll learn how to build a team of autonomous AI agents powered by OpenAI's GPT-4. AutoGen is a cutting-edge framework for creating multi-agent AI assistants, pulling ahead of competitors such as MetaGPT and ChatDev.

🤝 Connect with me 🤝
LinkedIn: https://www.linkedin.com/in/kris-ograbek/
Medium: https://medium.com/@kris-ograbek

+++ Useful Resources +++
Code: https://colab.research.google.com/drive/11HiXpnPNIN3WIJK76TG-tsraix_lhb0M?usp=sharing

+++ Sources for AutoGen +++
Docs: https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat
GitHub: https://github.com/microsoft/autogen/tree/main
Official Paper: https://arxiv.org/abs/2308.08155
Multi-agent Conversation Framework: https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat/
SDK: https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent/

Chapters:
0:00 Intro
1:24 Feature 1: Complete flexibility
2:39 Feature 2: Human participation
4:04 Feature 3: Multi-agent conversations
5:06 Feature 4: Flexible autonomy
5:50 User Proxy Agent autonomy explained
6:56 Assistant Agents explained
7:21 Group Chat Managers explained
9:06 AutoGen example project in Colab
11:21 GPT-4 prices :(
12:33 User Proxy Agent creation
13:40 Analyzing AutoGen results
16:35 AutoGen fixes the first bug
17:22 AutoGen fixes the second bug
18:12 Excitement about the results
20:45 Follow-up conversation with the User Proxy
24:15 AutoGen on your computer (+ function calling)
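The agent roles covered in the chapters (User Proxy Agent, Assistant Agent, Group Chat Manager) map onto AutoGen's Python API roughly as in the sketch below. This is a minimal sketch, assuming the pyautogen package and an OPENAI_API_KEY environment variable; the agent names, model choice, and settings are illustrative assumptions, not the tutorial's exact configuration (see the linked Colab for that).

```python
# Minimal AutoGen sketch of the agent roles discussed in the video.
# Assumes: pip install pyautogen, and OPENAI_API_KEY set in the environment.
import os

from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

# Assistant Agent: the LLM-backed worker that writes plans and code.
assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list})

# User Proxy Agent: stands in for the human and can execute the code the assistant writes.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",             # "ALWAYS" re-enables human participation
    max_consecutive_auto_reply=5,         # bounds the agent's autonomy
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Two-agent conversation: the proxy asks, the assistant answers, code runs locally.
user_proxy.initiate_chat(
    assistant,
    message="Write and run a Python script that prints the first 10 Fibonacci numbers.",
)

# Group Chat Manager: routes a conversation among several agents.
critic = AssistantAgent(name="critic", llm_config={"config_list": config_list})
groupchat = GroupChat(agents=[user_proxy, assistant, critic], messages=[], max_round=8)
manager = GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
user_proxy.initiate_chat(manager, message="Draft and review a short summary of the AutoGen paper.")
```

Setting human_input_mode to "TERMINATE" or "ALWAYS" is how the framework dials autonomy up or down, which is the trade-off the "Flexible autonomy" chapter explores.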

Read more

AI, Synthetic Intelligence -

Get on my daily AI newsletter 🔥 https://natural20.beehiiv.com/subscribe
[News, Research and Tutorials on AI]
See more at: https://natural20.com/
My AI Playlist: https://www.youtube.com/playlist?list=PLb1th0f6y4XROkUAwkYhcHb7OY9yoGGZH

Read more

AI, Synthetic Intelligence -

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.me/papers

DALL-E 3 in Bing Image Creator: https://www.bing.com/images/create
Or try it in Skype - for me, a contact named Bing appears in the user list and you can tell it to "make an image of (your prompt)": https://www.skype.com/en/

My latest paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or here is the original Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5

Sources:
Laughter: https://twitter.com/Randomized_AI/status/1709342476236902586/photo/1
Proverbs: https://www.engvid.com/english-resource/50-common-proverbs-sayings/
Sketch: https://www.reddit.com/r/ChatGPT/comments/16xc46l/so_i_was_messing_around_with_dalle_3_and_got_this/
Parrot: https://twitter.com/BjoPhoto777/status/1711705730598777264
Consistency: https://twitter.com/anukaakash/status/1710844686729114102
Consistency prompt: “create images of same four people in four different settings, create all images in same realistic photography style: a dad, mum and their two little boys, in park, in the car, in the beach, in the garden”
Aging: https://twitter.com/anukaakash/status/1709399920493617614
Painting: https://twitter.com/MaxZiebell/status/1707930920819261910
Moon base: https://twitter.com/Rahat_RF1/status/1711622331632849394
Digital art: https://twitter.com/OrctonAI/status/1710688047350546857
Really good!: https://twitter.com/skirano/status/1707915863221817787
Character consistency guide: https://semicolon.dev/midjourney/how-to-make-consistent-characters

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Gaston Ingaramo, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu
Károly Zsolnai-Fehér's research works: https://cg.tuwien.ac.at/~zsolnai/

Read more

AI, Synthetic Intelligence -

GPT-4 Caught LYING, Meta's INSANE New AI, No AI Safety? And MORE!! (#AINEWS 18)

Welcome to our channel, where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries) contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience #IntelligentSystems #Automation #TechInnovation

Read more
