If you’ve heard anything about the relationship between Big Tech and climate change, it’s probably that the data centers that power our online lives use a mind-boggling amount of energy. And some of the newest energy hogs on the block are artificial intelligence tools like ChatGPT. Some researchers suggest that ChatGPT alone might use as much power as 33,000 U.S. households in a typical day, a number that could balloon as the technology becomes more widespread.
The staggering emissions add to a general tenor of panic driven by headlines about AI stealing jobs, helping students cheat, or, who knows, taking over. Already, some 100 million people use OpenAI’s most famous chatbot on a weekly basis, and even those who don’t use it likely encounter AI-generated content often. But a recent study points to an unexpected upside of that wide reach: Tools like ChatGPT could teach people about climate change, and possibly shift deniers closer to accepting the overwhelming scientific consensus that global warming is happening and caused by humans.
In a study recently published in the journal Scientific Reports, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. (ChatGPT runs on GPT-3.5 and 4, updated versions of GPT-3). Large language models are trained on vast quantities of data, allowing them to identify patterns to generate text based on what they’ve seen, conversing somewhat like a human would. The study is one of the first to analyze GPT-3’s conversations about social issues like climate change and Black Lives Matter. It analyzed the bot’s interactions with more than 3,000 people, mostly in the United States, from across the political spectrum. Roughly a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus.
That doesn’t mean they enjoyed the experience, though. They reported feeling disappointed after chatting with GPT-3 about the topic, rating the bot’s likability roughly half a point lower on a 5-point scale. That creates a dilemma for the people designing these systems, said Kaiping Chen, an author of the study and a professor of computational communication at the University of Wisconsin-Madison. As large language models continue to develop, the study says, they could begin to respond to people in a way that matches users’ opinions — regardless of the facts.
“You want to make your user happy, otherwise they’re going to use other chatbots. They’re not going to get onto your platform, right?” Chen said. “But if you make them happy, maybe they’re not going to learn much from the conversation.”
Prioritizing user experience over factual information could lead ChatGPT and similar tools to become vehicles for bad information, like many of the platforms that shaped the internet and social media before it. Facebook, YouTube, and Twitter, now known as X, are awash in lies and conspiracy theories about climate change. Last year, for instance, posts with the hashtag #climatescam got more likes and retweets on X than ones with #climatecrisis or #climateemergency.
“We already have such a huge problem with dis- and misinformation,” said Lauren Cagle, a professor of rhetoric and digital studies at the University of Kentucky. Large language models like ChatGPT “are teetering on the edge of exploding that problem even more.”
The University of Wisconsin-Madison researchers found that the kind of information GPT-3 delivered depended on who it was talking to. For conservatives and people with less education, it tended to use words associated with negative emotions and talk about the destructive outcomes of global warming, from drought to rising seas. For those who supported the scientific consensus, it was more likely to talk about the things you can do to reduce your carbon footprint, like eating less meat or walking and biking when you can.
What GPT-3 told them about climate change was surprisingly accurate, according to the study: Only 2 percent of its responses went against the commonly understood facts about climate change. Still, these AI tools reflect what they’ve been fed and are liable to slip up sometimes. Last April, an analysis from the Center for Countering Digital Hate, a U.K. nonprofit, found that Google’s chatbot, Bard, told one user, without additional context: “There is nothing we can do to stop climate change, so there is no point in worrying about it.”
It’s not difficult to use ChatGPT to generate misinformation, though OpenAI does have a policy against using the platform to intentionally mislead others. It took some prodding, but I managed to get GPT-4, the latest public version, to write a paragraph laying out the case for coal as the fuel of the future, even though it initially tried to steer me away from the idea. The resulting paragraph mirrors fossil fuel propaganda, touting “clean coal,” a misnomer used to market coal as environmentally friendly.
There’s another problem with large language models like ChatGPT: They’re prone to “hallucinations,” or making up information. Even simple questions can turn up bizarre answers that fail a basic logic test. I recently asked GPT-4, for instance, how many toes a possum has (don’t ask why). It responded, “A possum typically has a total of 50 toes, with each foot having 5 toes.” It corrected course only after I questioned whether a possum had 10 limbs. “My previous response about possum toes was incorrect,” the chatbot said, updating the count to the correct answer, 20 toes.
Despite these flaws, there are potential upsides to using chatbots to help people learn about climate change. In a normal, human-to-human conversation, lots of social dynamics are at play, especially between groups of people with radically different worldviews. If an environmental advocate tries to challenge a coal miner’s views about global warming, for example, it might make the miner defensive, leading them to dig in their heels. A chatbot conversation presents more neutral territory.
“For many people, it probably means that they don’t perceive the interlocutor, or the AI chatbot, as having identity characteristics that are opposed to their own, and so they don’t have to defend themselves,” Cagle said. That’s one explanation for why climate deniers might have softened their stance slightly after chatting with GPT-3.
There’s now at least one chatbot aimed specifically at providing quality information about climate change. Last month, a group of startups launched “ClimateGPT,” an open-source large language model trained on climate-related studies in science, economics, and other fields. One of the goals of the ClimateGPT project was to generate high-quality answers without sucking up an enormous amount of electricity. It uses 12 times less computing energy than a comparable large language model, according to Christian Dugast, a natural language scientist at AppTek, a Virginia-based artificial intelligence company that helped fine-tune the new bot.
ClimateGPT won’t be quite ready for the general public “until proper safeguards are tested,” according to its website. Despite the problems Dugast is working to address — the “hallucinations” and factual failures common among these chatbots — he thinks it could be useful for people hoping to learn more about some aspect of the changing climate.
“The more I think about this type of system,” Dugast said, “the more I am convinced that when you’re dealing with complex questions, it’s a good way to get informed, to get a good start.”
This story was originally published by Grist with the headline What happened when climate deniers met an AI chatbot? on Feb 1, 2024.