What does the Theory of Mind breakthrough discovered in GPT-4 mean for the future of our interactions with language models?
How might this complicate our ability to test for AI consciousness?
I show the weaknesses of a range of tests of consciousness, and how GPT-4 passes them.
I then show how tests like these, and other developments, have led to a difference of opinion at the top of OpenAI on the question of sentience.
I draw on numerous academic papers and the work of David Chalmers, an eminent thinker on the hard problem of consciousness, and touch on ARC's post yesterday about how they conducted safety evaluations, and on the urgency of the moment.