Inaccuracies produced by artificial intelligence chatbots threaten pillars of American society such as elections and education, warn tech experts from across a wide range of industries.
Chatbots like ChatGPT and Google’s Bard can present inaccurate information as fact, errors dubbed “hallucinations” in the AI world, and consumers should treat their output with caution, experts said.
“We should always be wary of chatbot ‘hallucinations’ and biases that may be present in the technology,” James Czerniawski, a senior policy analyst at Americans for Prosperity, headquartered in Virginia, told Fox News Digital.
“If a technology is inadvertently or intentionally misrepresenting certain viewpoints, that presents a potential opportunity to mislead users about actual facts about events, positions of individuals, or their reputations more broadly speaking.”
The threats come largely from AI’s ability to blur the lines between fact and fiction, and “misinformation” is the biggest danger facing consumers, said Christopher Alexander, the chief communications officer of Liberty Blockchain, based in Utah.
AI could reflect the “values and beliefs” of those who built the algorithm, Alexander said, and those values and beliefs may not align with those of the chatbot’s users.
Elon Musk addressed the political risk this week in a wide-ranging interview with Fox News.
“Even if you say that AI doesn’t have agency, well, it’s very likely that people will use the AI as a tool in elections,” Musk told host Tucker Carlson.
“And then, you know, if AI’s smart enough, are they using the tool or is the tool using them? So I think things are getting weird, and they’re getting weird fast,” Musk continued.
The power to harness opinion through social media platforms has already been evident through the last few election cycles — most notably in 2020, when it was revealed that Twitter censored stories about Hunter Biden’s laptop.
Artificial intelligence hallucinations could fuel an explosion in tech giants’ ability to spread political misinformation, including through “deep fakes” that portray people, for good or ill, in artificially manufactured situations.
“It looks exactly like Trump or Biden. They sound exactly like Trump or Biden,” said Israeli author and intellectual Yuval Noah Harari of deep-fake images and videos.
“But you can’t trust it. Because you now know, well, they can generate anything.”
“The deeper problem is not simply that (AI) will become autonomous and turn us all into slaves, but that it will control our understanding of reality and do it in a really dishonest way,” Carlson said in his Musk interview.
“It could be programmed to lie to us for political effect.”
The same challenge — the ability to separate fact from fiction — will be compounded in academia, where both educators and students could be tempted to let artificial intelligence think for them.
Much as many young drivers can no longer navigate from one location to the next without GPS directions, students risk getting through college without ever learning, or even thinking, for themselves.
Marc Beckman, an adjunct professor and senior fellow at New York University (NYU), told Fox News Digital that there will always be a tension built into the relationship between an educator and a student who wants to be creative, exemplified in the discourse surrounding AI products like ChatGPT.
Teachers want to give their students room to spread their wings while keeping them from taking shortcuts that could hinder their education, he said. He added that restrictions imposed on curious learners could have a “chilling effect” on the accelerated pace of innovation needed to compete and thrive in the near future.
“Me, certainly, as a professor, I’m going to create certain mechanisms that will essentially push my students to naturally build a strong depth of knowledge and give them that foundation without the technology,” said Beckman.
Education has already proven an early battleground over artificial intelligence. Students have begun using AI as a shortcut to getting work done — and educators have responded with AI tools of their own meant to sniff out work that’s been plagiarized or produced by sources other than students.
The fight over AI in the classroom has already produced at least one cheating scandal on a college campus.
William Quarterman, a student at the University of California, Davis, was flagged earlier this month by a professor for possibly using an AI program to cheat; the professor had used another AI program to hunt down cheaters.
The student was cleared of wrongdoing, but only after facing school authorities over charges of academic dishonesty.
One of the ways to combat disinformation, Harari said, is to reinforce the need for trustworthy institutions.
“So, what can you trust? You trust the publisher. You trust the institution,” Harari said.
Amid the fears of mistrust, however, hallucinations may also offer hope for the future.
They “can be harnessed and used to our advantage,” claimed Phil Siegel, the founder of CAPTRS, a nonprofit that uses simulation gaming and artificial intelligence to improve societal disaster preparedness.
“While hallucinations are bad when precision, accuracy and truth are vital — like in architecture, construction or journalism, for example — they can be used as a force for good when creativity is the goal, through presenting the unforeseen, or the unimagined, should they ever become reality,” Siegel told Fox News Digital.
“Having an AI that can generate thousands of possible scenarios, that have yet to be thought of by humans, can help officials and leaders devise strategies to address them before they happen.”
The challenge now, experts say, is to ensure that the promise of artificial intelligence outweighs the potential dangers.
And right now, as humanity grapples with the future of AI, the challenges are profound.
Said Musk in his Fox News interview with Carlson, “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential, however small one may regard that probability, but it is not trivial; it has the potential of civilizational destruction.”