Artificial Cognition RSS

Artificial Cognition, Chain-of-Thought Prompting, MLLM -

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter...
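The prompting scheme described above can be sketched in a few lines: each few-shot exemplar pairs a question with worked-out intermediate reasoning before the final answer, and the new question is appended so the model continues in the same style. This is a minimal illustrative sketch, not the paper's code; the exemplar text and helper name `build_cot_prompt` are assumptions for illustration.

```python
# Minimal sketch of chain-of-thought prompting: few-shot exemplars that
# include intermediate reasoning steps, followed by the new question.
# Exemplar and question text are illustrative placeholders.

EXEMPLARS = [
    {
        "question": ("Roger has 5 tennis balls. He buys 2 more cans of "
                     "3 tennis balls each. How many tennis balls does he have now?"),
        "reasoning": ("Roger started with 5 balls. 2 cans of 3 balls each "
                      "is 6 balls. 5 + 6 = 11."),
        "answer": "11",
    },
]

def build_cot_prompt(question: str, exemplars=EXEMPLARS) -> str:
    """Assemble a prompt: worked exemplars first, then the new question."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}."
        )
    parts.append(f"Q: {question}\nA:")  # model continues with its own reasoning
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?"
)
print(prompt)
```

The resulting string is what gets sent to the model; the trailing `A:` invites it to emit its own reasoning chain before the answer, which is where the reported gains come from.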

Read more

Artificial Cognition, Deep Thought, MLLM, Multimodal Large Language Model -

Today's large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text. This achievement has led to speculation that these networks are -- or will soon become -- "thinking machines", capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. Drawing on evidence from cognitive neuroscience,...

Read more

Artificial Cognition -

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases. Sound complicated? Master Inventor Martin Keen gives you a simple (and fun) explanation on how explainable AI works.

Read more

Artificial Cognition, Psychology -

Abstract Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over 150 years of experience modeling it through experimentation....

Read more

AI, Artificial Cognition -

What does the Theory of Mind breakthrough discovered in GPT 4 mean for the future of our interactions with language models? How might this complicate our ability to test for AI consciousness? I show the weaknesses of a range of tests of consciousness, and how GPT 4 passes them. I then show how tests like these, and other developments, have led to a difference of opinion at the top of OpenAI on the question of sentience. I bring in numerous academic papers and David Chalmers, an eminent thinker on the hard problem of consciousness, and touch on yesterday's ARC post on...

Read more
