Tool or conscious entity?: New study by UW professor explores surprising views on AI like ChatGPT


A recent study by UW assistant psychology professor Dr. Clara Colombatto and her colleague, Dr. Steve Fleming, a professor of cognitive neuroscience at University College London, reveals that two-thirds of people surveyed believe that artificial intelligence (AI) tools like ChatGPT possess some degree of consciousness, including feelings, thoughts, and experiences.

“A subset of the population, about one third, don’t attribute any consciousness to ChatGPT, but the rest, so 67 per cent, attribute some degree of consciousness,” Colombatto said. “So you have people who attribute a little bit of consciousness and then people who are willing to say that it’s as conscious as a human would be.”

These results were surprising to Colombatto and Fleming. “It’s a completely new way of thinking about human-AI interactions where it’s not even like Siri. It’s not a voice, it’s simply text on a screen,” said Colombatto. “It’s very surprising that people attribute consciousness to something that they have never seen, and for a cognitive scientist, it’s quite interesting.”

The study surveyed 300 U.S. participants, chosen to roughly match the U.S. population in terms of gender and age. The survey gathered information about participants’ attitudes toward AI, focusing on consciousness, subjective experiences of AI, other mental states, and familiarity with AI. This produced another intriguing finding. “We had initially thought that usage would predict lower consciousness attributions because if you use it frequently, you kind of see it as a tool,” Colombatto said. “But actually we found that usage is positively correlated with consciousness attribution. So the more familiar someone is and the more frequently they use it, the more likely they are to also attribute consciousness to it.”

This raises the question: Does using ChatGPT more frequently cause people to view it as more than just AI, or are those who believe ChatGPT is conscious more likely to use these tools? “We don’t really know the causal direction of this, so one interpretation is that using it more often leads you to think that it’s more conscious because of this more frequent interaction,” Colombatto explained. “Another option is that people who think it is conscious in the first place are the ones who are more likely to use it later. So we don’t really know the direction of this.”

This frequent interaction also brings in a key component of usage: trust. Innovators want people to trust their products so that they will keep using them. But where is the line between trust and overreliance?

“I think that there’s some sort of conflict there between increasing uptake and ensuring that it’s well used and that the usage is regulated,” Colombatto said. “On the one hand, these tools can help us a lot in work and can make us more efficient… But also there’s this risk of overreliance. It cuts the thinking that the students will do because [the AI] is doing it for them, [and] in some cases the learning can be decreased.”

Colombatto offers two key strategies to avoid overreliance on ChatGPT: “First, come up with the answer yourself before entirely relying on something else. So, if you first try to generate something [yourself] and then edit it based on advice, I find that that helps kind of balance this [issue].” Additionally, she recommends fact-checking ChatGPT’s responses against other sources to ensure accuracy.

The future of AI research is promising, with many questions still to explore, including how AI affects learning, optimal ways to prompt AI for accurate responses, cultural differences in beliefs about AI, and longitudinal studies tracking changes in people’s perceptions of AI. “There’s not going to be a one-fit-for-all recipe for how to interact with AI,” Colombatto said. “It’s a quite exciting time for cognitive science to be looking at individual attitudes, but [we must] not take the population as a whole being all the same.”