AGI RSS

AGI, AI, AI Models, GPT-4, Tree of Thoughts -

A new A.I. paper has been released: “Tree of Thoughts: Deliberate Problem Solving with Large Language Models”, by researchers at Princeton University and Google DeepMind. It shows how to increase GPT-4’s ability to autonomously solve complex problems... but it comes with a warning.

Paper: https://arxiv.org/abs/2305.10601
PDF: https://arxiv.org/pdf/2305.10601.pdf
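For readers curious what “deliberate problem solving” looks like in practice, here is a minimal sketch of the tree-search idea the paper builds on. The `propose` and `evaluate` helpers are hypothetical stand-ins for LLM calls, and the breadth-first loop illustrates the general technique, not the authors’ implementation.

```python
# Minimal sketch of a Tree-of-Thoughts-style search (breadth-first variant).
# `propose(problem, state, n)` would ask an LLM for n candidate next thoughts;
# `evaluate(problem, state)` would ask it to score a partial solution.
# Both are hypothetical helpers, not the paper's actual code.

def tree_of_thoughts(problem, propose, evaluate, depth=3, breadth=5, keep=2):
    frontier = [""]  # each state is the chain of thoughts so far
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for thought in propose(problem, state, n=breadth):
                candidates.append(state + "\n" + thought)
        # keep only the most promising partial chains, pruning the rest
        candidates.sort(key=lambda s: evaluate(problem, s), reverse=True)
        frontier = candidates[:keep]
    return frontier[0]  # best chain of thoughts found
```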

Read more

AGI, AI Ethics, Governance, Superintelligence -

Two documents released in the last few days, OpenAI’s ‘Governance of Superintelligence’ and DeepMind’s ‘Model Evaluation for Extreme Risks’, reveal that the top AGI labs are thinking hard about how to live with, and govern, a superintelligence. I want to cover what they see coming. I’ll show you persuasive evidence that the GPT-4 model has been altered and now gives different outputs than it did two weeks ago. And I’ll look at the new Tree of Thoughts and CRITIC prompting systems, which might constitute ‘novel prompt engineering’. I’ll also touch on the differences among the AGI lab...
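To make the CRITIC mention concrete: the technique has the model draft an answer, check it against an external tool, and revise. Below is a loose sketch under that reading; `ask` and `run_tool` are hypothetical helpers, and the prompts are illustrative, not taken from the paper.

```python
# Loose sketch of a CRITIC-style loop: draft, verify with a tool, revise.
# `ask` wraps an LLM call; `run_tool` could execute generated code or query
# a search engine. Both are hypothetical helpers for illustration only.

def critic_loop(question, ask, run_tool, rounds=2):
    answer = ask(question)
    for _ in range(rounds):
        feedback = run_tool(question, answer)  # external check on the answer
        critique = ask(f"Question: {question}\nAnswer: {answer}\n"
                       f"Tool feedback: {feedback}\n"
                       "Point out any errors in the answer.")
        answer = ask(f"Question: {question}\nPrevious answer: {answer}\n"
                     f"Critique: {critique}\nGive a corrected answer.")
    return answer
```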

Read more

AGI, AI, GPT-4 -

In this video, I will not only show you how to get smarter results from GPT-4 yourself, but also showcase SmartGPT, a system which I believe, with evidence, might help beat state-of-the-art MMLU benchmarks. This should serve as your ultimate guide to boosting the automatic technical performance of GPT-4, without even needing few-shot exemplars. The video will cover papers published in the last 72 hours, like Automatically Discovered Chain of Thought, which beats even ‘Let’s think step by step’, and the approach that combines it all. Yes, the video also touches on the...
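As a rough illustration of the kind of pipeline described here, the sketch below drafts several chain-of-thought answers, has the model critique them, and has a resolver pick and improve the best one. `ask` is a hypothetical wrapper around a chat-completion call, and the prompts are paraphrases for illustration, not the exact SmartGPT prompts.

```python
# Rough sketch of a generate / critique / resolve pipeline.
# `ask(prompt)` is a hypothetical helper that returns the model's reply.

def smart_gpt(question, ask, n=3):
    # 1. Draft several independent chain-of-thought answers.
    drafts = [ask(f"{question}\nLet's work this out step by step.")
              for _ in range(n)]
    numbered = "\n\n".join(f"Answer {i+1}: {d}" for i, d in enumerate(drafts))
    # 2. Have the model act as a researcher and list flaws in each draft.
    critique = ask(f"{question}\n\n{numbered}\n\n"
                   "You are a researcher. List the flaws in each answer.")
    # 3. Have a resolver choose the best answer and improve it.
    return ask(f"{question}\n\n{numbered}\n\nCritique: {critique}\n\n"
               "You are a resolver. Choose the best answer and improve it.")
```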

Read more

AGI, AI, Theory of Mind -

Boosting Theory-of-Mind Performance in Large Language Models via Prompting

Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain-of-thought reasoning and step-by-step thinking instructions. We found that LLMs...
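To illustrate the two prompt styles the study compares, here is a small sketch: a two-shot chain-of-thought prompt with worked ToM exemplars versus a bare step-by-step instruction. The exemplar text is invented for illustration and is not taken from the paper.

```python
# Sketch of the two prompting styles compared in the study. The two worked
# false-belief exemplars below are invented examples, not the paper's.

TWO_SHOT_COT = """Q: Sally puts her ball in the basket and leaves.
Anne moves the ball to the box. Where will Sally look for the ball?
A: Sally last saw the ball in the basket and did not see Anne move it,
so she will look in the basket.

Q: Tom is told the jar holds candy, but it actually holds pencils.
What does Tom think the jar holds?
A: Tom has only been told it holds candy, so he thinks it holds candy.
"""

def tom_prompt(question, style="two_shot_cot"):
    if style == "two_shot_cot":
        # two worked exemplars followed by the new question
        return TWO_SHOT_COT + f"\nQ: {question}\nA:"
    # step-by-step variant: no exemplars, just an explicit instruction
    return f"Q: {question}\nA: Let's think step by step."
```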

Read more
