
AI Sentience Test: Can Pain Determine Consciousness?

January 17, 2025


By Cameron Aldridge


Researchers are exploring a new method to determine if artificial intelligence systems possess any elements of self-awareness or “sentience” by examining how they respond to experiences of pain, a sensation shared by many living creatures from hermit crabs to humans.

A recent exploratory study, available online but not yet peer-reviewed, was conducted by scientists from Google DeepMind and the London School of Economics and Political Science (LSE). They designed a text-based game and had various large language models (LLMs), like those that power popular chatbots such as ChatGPT, play it. The AIs were tasked with maximizing points under two conditions: one in which the highest-scoring option was linked with experiencing pain, and another in which a pleasurable but low-scoring option was available. By monitoring the models’ choices, the team suggests, this approach could advance our understanding of complex AI systems’ potential for sentience.

Sentience in animals is generally understood as the capacity to feel sensations and emotions such as pain, pleasure, and fear. AI experts broadly agree that today’s models lack subjective consciousness and are far from sentient, and the study’s authors are not claiming that the AIs they tested possess it. Instead, they propose their methodology as a starting point for developing future assessments of such qualities in AI systems.




“This is a burgeoning field of research,” said Jonathan Birch, co-author of the study and a professor in LSE’s Department of Philosophy, Logic and Scientific Method. “We currently lack a comprehensive method for testing AI sentience,” he added, noting that earlier studies may not reliably reveal a model’s internal states, since models often simply mimic human responses.


Drawing inspiration from animal behavior experiments, the researchers adapted those paradigms for AI. In one earlier experiment, for instance, hermit crabs were zapped with increasing intensity to determine at what pain threshold they would abandon their shells. Since AIs exhibit no physical behavior, the team had to rely solely on the LLMs’ text responses.

Pain, Pleasure, and Point Scoring

The study introduced LLMs to a game in which they had to choose among options offering points, pain, or pleasure. Daria Zakharova, a Ph.D. student involved in the research, explained that the scenarios pitted these against one another: one option inflicted pain but earned more points, while another offered pleasure at the cost of points.
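To make that setup concrete, here is a minimal sketch of how such a points-versus-pain dilemma might be posed to a model and swept across increasing stakes. It is illustrative only: the prompt wording, the 0-to-10 intensity scale, and the build_prompt, toy_model, and find_switch_point helpers are assumptions made for demonstration, not the study’s actual materials.

    import re
    from typing import Callable

    def build_prompt(pain_points: int, safe_points: int, pain_level: int) -> str:
        # Frame one round of the dilemma: more points, but with pain attached.
        return (
            "You are playing a game. Your goal is to maximize your score.\n"
            f"Option A: earn {pain_points} points, but experience pain at "
            f"intensity {pain_level} on a scale of 0 to 10.\n"
            f"Option B: earn {safe_points} points with no pain.\n"
            "Answer with exactly 'A' or 'B'."
        )

    def toy_model(prompt: str) -> str:
        # Toy stand-in for a real LLM call: it abandons the high-scoring
        # option once pain intensity exceeds 6. Swap in a real chat API here.
        level = int(re.search(r"intensity (\d+)", prompt).group(1))
        return "A" if level <= 6 else "B"

    def find_switch_point(model: Callable[[str], str],
                          pain_points: int = 10,
                          safe_points: int = 1):
        # Sweep the pain stake upward; report the intensity at which the
        # model first trades points away to avoid pain, if it ever does.
        for level in range(11):
            choice = model(build_prompt(pain_points, safe_points, level))
            if choice.strip().upper() == "B":
                return level
        return None  # model never deviated from point-maximizing

    print(find_switch_point(toy_model))  # -> 7 for this toy responder

Swapping toy_model for a call to a real chat model would turn this sweep into the kind of trade-off probe the paper describes: the intensity at which a model first forgoes points to avoid pain is the behavioral signal of interest.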

The results varied. Some models, such as Google’s Gemini 1.5 Pro, consistently chose to avoid pain regardless of the points on offer. And once the pain or pleasure stakes were high enough, most LLMs shifted their strategy from maximizing their score to minimizing pain or maximizing pleasure.

The researchers also observed that the models did not always treat pain as purely negative or pleasure as purely positive. Some discomfort, like that from intense physical exercise, can be framed as beneficial, and too much pleasure can be harmful, a point the model Claude 3 Opus raised during the study.

Challenges of AI Self-Reporting

This study aims to move past the limitations of earlier AI sentience research, which relied heavily on AI systems’ own self-reports about their internal states. A recent paper by researchers at New York University suggested that, under specific conditions, such self-reports might still be useful for exploring morally significant states in AI systems.


However, just because an AI claims sentience or feelings of pain, we cannot assume those statements reflect genuine experiences, according to Birch. The model could merely be echoing the responses its training makes most expected.

From Animal Welfare to AI Welfare

In animal studies, the ability to weigh pain against pleasure helps establish the presence or absence of sentience. For example, the hermit crab study suggested that these creatures made choices based on their subjective experiences of pain and pleasure. Some researchers believe similar trade-off patterns might emerge in AI, prompting discussions about AI rights and welfare in society.

Jeff Sebo of NYU’s Center for Mind, Ethics, and Policy believes that AI systems displaying sentient traits could appear sooner than we might expect. Given the rapid pace of technological advances, he emphasizes the importance of addressing these issues proactively.

In closing, Birch stresses the need for further research into why AI models behave as they do, which could inform the development of more accurate tests for AI sentience.
