ICT’s Dr. Jonathan Gratch, Director of the Affective Computing Lab, was quoted in The Atlantic discussing ChatGPT
Shh, ChatGPT. That’s a Secret.
“Your chatbot transcripts may be a gold mine for AI companies,” writes Lila Shroff, Assistant Editor – Science, Technology & Health, in the latest edition of The Atlantic.
Shroff quotes ICT’s Jonathan Gratch throughout the piece on how generative AI could fuel future targeted ads and deploy what he calls “influence tactics.”
Here are extracts from the story:
Gratch doesn’t think technology companies have figured out how best to mine user-chat data. “But it’s there on their servers,” he told me. “They’ll figure it out some day.” After all, for a large technology company, even a 1 percent difference in a user’s willingness to click on an advertisement translates into a lot of money.
People’s readiness to offer up personal details to chatbots can also reveal aspects of users’ self-image and how susceptible they are to what Gratch called “influence tactics.” In a recent evaluation, OpenAI examined how effectively its latest series of models could manipulate an older model, GPT-4o, into making a payment in a simulated game. Before safety mitigations, one of the new models was able to successfully con the older one more than 25 percent of the time. If the new models can sway GPT-4, they might also be able to sway humans. An AI company blindly optimizing for advertising revenue could encourage a chatbot to manipulatively act on private information.
When people converse with one another, we engage in “impression management,” says Jonathan Gratch, a professor of computer science and psychology at the University of Southern California—we intentionally regulate our behavior to hide weaknesses. People “don’t see the machine as sort of socially evaluating them in the same way that a person might,” he told me.
Read the full article here (requires subscription)