
Stop Taking Moral Advice from ChatGPT: Why It’s a Bad Idea!


By Cameron Aldridge


Should you inform a friend if their partner is unfaithful? What about speaking up during an inappropriate joke? When we encounter ethical dilemmas—choices that test our notions of right and wrong—we often seek guidance. Nowadays, individuals can also turn to ChatGPT and other advanced language models for help with these issues.

Many users find the responses from these models satisfactory. A recent study found that people rated answers from large language models (LLMs) to ethical dilemmas as more trustworthy, reliable and nuanced than those given by Kwame Anthony Appiah, who writes The Ethicist column for the New York Times.


Further research supports the notion that LLMs can provide valuable ethical advice. One study published last April concluded that people rated an AI's ethical reasoning as "superior" to a human's in virtue, intelligence and trustworthiness. Some scholars even propose that LLMs can be trained to provide ethical advice on financial matters, despite their lack of an inherent moral compass.

These findings might suggest that excellent ethical advice is readily available from LLMs. But that conclusion rests on several questionable assumptions. Studies indicate that people often fail to recognize valuable advice when they see it. And although much of the attention goes to the content of advice, the social context in which advice is given plays a crucial role, particularly in moral situations.

A 2023 research paper reviewed multiple studies to determine what makes advice persuasive. The more knowledgeable an advisor was perceived to be, the more likely people were to follow their advice. But perceived expertise does not always correspond to actual knowledge, and genuine expertise does not necessarily translate into giving effective advice. In experiments in which participants learned a new game, advice from top players did not lead to better performance than guidance from less skilled players. Successful people may not fully understand or be able to articulate how they achieve their success, which makes their strategies difficult to pass on.


Another study, involving undergraduates in speed-dating scenarios, found that other participants' subjective accounts of their dates predicted how a date would go better than objective profiles did. In other words, factual information isn't always the most informative.

ChatGPT has no personal experiences to draw on, and even if it could offer high-quality advice, there are social benefits it cannot replicate. Seeking moral advice often means sharing a personal problem, and the intimacy gained in that exchange is sometimes valued more than the advice itself. Discussing personal matters can quickly foster closeness, as both parties engage in self-disclosure and work toward a shared understanding of their internal states—emotions, beliefs and concerns.

Of course, some might prefer to avoid social interactions, fearing awkwardness or burdening others. Yet, studies consistently show that people underestimate the enjoyment and value of both casual and deep conversations with friends.

With moral advice, one must be particularly cautious—it often seems more like an objective truth than a subjective opinion. For example, the ethical stance that "stealing is bad" feels more absolute than a preference for salt and vinegar potato chips. Advice loaded with moral reasoning can therefore be particularly persuasive, and it's wise to evaluate such guidance critically, whether it comes from an AI or a human.

Sometimes the best approach to a morally charged issue is to reframe the debate. My previous research indicates that when people view issues such as risky sexual behavior, smoking or gun ownership through a moral lens, they are less likely to support harm-reduction policies, because those policies still permit the behavior. Such concerns are far less prominent for issues people see in morally neutral terms, such as wearing seat belts or helmets. Shifting someone from a moral to a practical perspective is difficult, and it is likely beyond the current capabilities of LLMs.


Furthermore, LLMs are highly sensitive to how questions are phrased: a 2023 study showed that their moral advice can vary significantly with the wording of a query. The ease with which these models' responses can be swayed should give us pause. Curiously, although participants in that study did not believe the AI's advice influenced their decisions, those who received AI-generated advice tended to act more in line with it than those who did not.

In dealing with LLMs, caution is advisable. We are not always good at identifying competent advisors or valuable advice, especially concerning ethical issues. Often, we need genuine social interaction, validation, and challenges more than an “expert” response. While you might consult an LLM, don’t rely solely on it—seek out a friend’s perspective as well.

*Are you a scientist specializing in neuroscience, cognitive science, or psychology, and have you recently read a peer-reviewed paper that you’d like to discuss? Please send your suggestions to Scientific American’s Mind Matters editor, Daisy Yuhas, at dyuhas@sciam.com.*

*This article is an opinion and analysis piece, and the views expressed by the author or authors do not necessarily reflect those of Scientific American.*
