Study proves artificial intelligence can respond to complex survey questions like a real human
Brigham Young University
Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you'll think they were taken by professional photographers.
Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU proves that artificial intelligence can respond to complex survey questions just like a real human.
To determine whether artificial intelligence could substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model -- a model that mimics the complicated relationships among human ideas, attitudes, and the sociocultural contexts of subpopulations.
In one experiment, the researchers created artificial personas by assigning the AI certain characteristics such as race, age, ideology, and religiosity, and then tested whether those personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and humans voted.
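To make the persona-conditioning idea concrete, here is a minimal Python sketch of how such a setup might look. It is an illustration only, not the study's actual code: the attribute wording, the `build_persona_prompt` helper, and the `query_model` placeholder are assumptions standing in for whatever GPT-3 interface the researchers used.

```python
# Illustrative sketch only -- not the study's code. It shows how an "artificial
# persona" might be conditioned by folding demographic attributes into a prompt
# before asking a GPT-3-style model for a vote choice.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a GPT-3-style completion API."""
    raise NotImplementedError("Connect this to your language-model provider.")

def build_persona_prompt(persona: dict, year: int) -> str:
    """Render persona attributes (race, age, ideology, religiosity) as a
    first-person backstory, then ask for a vote choice in the given election."""
    backstory = (
        f"I am a {persona['age']}-year-old {persona['race']} voter. "
        f"Ideologically, I consider myself {persona['ideology']}, "
        f"and religion is {persona['religiosity']} in my life."
    )
    question = (
        f"In the {year} U.S. presidential election, the candidate I voted for was"
    )
    return backstory + "\n" + question

persona = {
    "age": 46,
    "race": "white",
    "ideology": "somewhat conservative",
    "religiosity": "very important",
}

prompt = build_persona_prompt(persona, year=2016)
# completion = query_model(prompt)  # model volunteers a candidate name
print(prompt)
```

The open-ended completion at the end lets the model volunteer a candidate name, which could then be tallied against how ANES respondents with matching characteristics actually voted.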
"I was absolutely surprised to see how accurately it matched up," said David Wingate, BYU computer science professor, and co-author on the study. "It's especially interesting because the model wasn't trained to do political science -- it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted."
In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.
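A similarly hedged sketch of that second setup, constraining the persona to a fixed list of response options, might look like the following. The survey item and options are illustrative examples rather than verbatim ANES questions, and the helper name is hypothetical.

```python
# Illustrative sketch only, not the authors' code: a conditioned persona answers
# an interview-style item by choosing from a fixed list of response options,
# which can then be compared against the distribution of human ANES responses.

SURVEY_ITEM = "Generally speaking, how interested are you in politics?"
OPTIONS = [
    "Very interested",
    "Somewhat interested",
    "Not very interested",
    "Not at all interested",
]

def build_survey_prompt(backstory: str, item: str, options: list[str]) -> str:
    """Format a backstory, an interview question, and numbered answer options,
    ending mid-sentence so the model's completion names one option."""
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return (
        f"{backstory}\n\n"
        f"Interviewer: {item}\n"
        f"{numbered}\n"
        "Me: I would say option"
    )

print(build_survey_prompt(
    "I am a 46-year-old voter who follows the news closely.",
    SURVEY_ITEM,
    OPTIONS,
))
```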
This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. It could be used to test surveys, slogans, and taglines as a precursor to focus groups.
"We're
learning that AI can help us understand people better," said BYU political
science professor Ethan Busby. "It's not replacing humans, but it is
helping us more effectively study people. It's about augmenting our ability
rather than replacing it. It can help us be more efficient in our work with
people by allowing us to pre-test our surveys and our messaging."
And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions -- how much does AI really know? Which populations will benefit from this technology, and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?
While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.
"We're
going to see positive benefits because it's going to unlock new
capabilities," said Wingate, noting that AI can help people in many
different jobs be more efficient. "We're also going to see negative things
happen because sometimes computer models are inaccurate and sometimes they're
biased. It will continue to churn society."
Busby says surveying artificial personas shouldn't replace the need to survey real people, and that academics and other experts need to come together to define the ethical boundaries of using artificial intelligence surveys in social science research.
Story Source: Materials provided by Brigham Young University. Original written by Tyler Stahle.