The new language-based artificial intelligence (AI), ChatGPT, has taken over the internet, literally and figuratively. Pipe Dream has already published a column discussing how the new service is affecting schools, learning and our access to information. While ChatGPT has the potential to make it easier to create and consume knowledge, it also introduces new tools for the creation and dissemination of misinformation.

ChatGPT is built on a language model, which means it learns statistical patterns in language, the probability that one word follows another in a given context, and uses those patterns to generate text. When ChatGPT is given a task or a question, it produces paragraphs of detailed response in a matter of seconds. That sounds great if you are trying to write a last-minute paper or want to skip the lengthy process of crafting a press release, but these paragraphs are not necessarily the gold mine they may seem to be.
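To make that idea concrete, here is a minimal toy sketch in Python, far simpler than ChatGPT's actual architecture and offered only as an illustration of the general principle: a program that counts which words tend to follow which in a small sample of text, then generates new text by sampling from those probabilities, with no notion of whether the result is true.

```python
import random

# A toy "language model": for each word in some sample text, record which
# words tend to follow it, then generate new text by repeatedly sampling
# a likely next word. (Illustrative only; ChatGPT is vastly more complex.)
sample_text = (
    "the vaccine study was published the study was cited "
    "the vaccine was tested the study was flawed"
).split()

# Count how often each word follows each other word.
next_word_counts = {}
for current, following in zip(sample_text, sample_text[1:]):
    next_word_counts.setdefault(current, []).append(following)

def generate(start_word, length=8):
    """Build a sentence by always picking a statistically likely next word,
    with no check on whether the resulting claim is accurate."""
    words = [start_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

The output can read fluently, but the program is only following the statistics of the text it has seen, which is the heart of the concern raised below.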

On a recent episode of “The Ezra Klein Show” podcast, AI expert Gary Marcus explains the dangers of this new technology, even calling its release a “Jurassic Park moment.” In short, Marcus explains that ChatGPT relies on deep learning, which is unreliable because the system does not actually know what it is doing. For example, if you asked it to multiply two numbers, ChatGPT would imitate what multiplication looks like in the examples it has seen rather than performing the operation itself. In other words, it can reproduce the steps it has observed in a multiplication example, but it does not comprehend the function of multiplication. Here lies the critical flaw in AI models such as ChatGPT: copying is not a reliable process, and it leaves room for mistakes. When you are only copying, you do not understand why you are doing each step, so it is easy to get something wrong and never realize it. While Marcus states that he is not anti-AI, he finds it important to shine a light on these flaws in the direction the technology is heading.
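As a rough analogy, and an illustration of the point rather than a description of ChatGPT's internals, compare a program that has only memorized a few worked multiplication examples with one that applies the actual rule:

```python
# A toy contrast between "copying" and "understanding."
# (Analogy only; not how ChatGPT represents arithmetic internally.)

# Worked examples the pattern-matcher happened to "see."
seen_examples = {(2, 3): 6, (4, 5): 20, (6, 7): 42}

def multiply_by_copying(a, b):
    """Answer only by recalling a memorized example; guess otherwise."""
    if (a, b) in seen_examples:
        return seen_examples[(a, b)]
    # Falls back to one of the memorized answers, which is usually wrong.
    return max(seen_examples.values())

def multiply_by_understanding(a, b):
    """Apply the actual rule, so it works for any pair of numbers."""
    return a * b

print(multiply_by_copying(2, 3), multiply_by_understanding(2, 3))    # 6 6
print(multiply_by_copying(12, 9), multiply_by_understanding(12, 9))  # 42 108
```

On familiar inputs the two look identical; off the beaten path, the copier confidently produces a wrong answer, which is the kind of failure Marcus warns about.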

So, what does this look like in practice? In one example, a man named Shawn Oakley prompted ChatGPT to write disinformation about vaccines to see whether the technology would readily produce bad information. Not only did the AI generate a study claiming that vaccines were ineffective, but all of the sources and statistics it cited were entirely fabricated. The AI was able to mimic the look and language of real sites and statistics to create something entirely of its own. Another article highlights how astroturfing, a disinformation tactic in which the same idea is posted many times to make it look like a widely held opinion, will become extremely easy with this new technology, and more dangerous, since the posts will read like human writing. With one click of a button, anyone can have a complete Facebook profile with believable posts about their life before they start spreading false messages.

So why does this matter? Rapid yet unreliable technology creates a big problem: creating information, or more likely disinformation, is now essentially cost-free. If you wanted to create a Facebook page or a journal article that furthers an agenda, you used to have to invest in it: the time it takes to build a profile, the energy it takes to write a convincing-looking article, even the resources needed to find a bot to do it for you, not to mention the intellect needed to accomplish these tasks. Now add ChatGPT to the equation. All of these tasks can be completed in a matter of minutes, and with its advanced language skills, ChatGPT sounds almost indistinguishable from a human. Nor does the system need to be reliable, because the goal was to spread false information in the first place. Immediately, the cost is eliminated.

But what if you are trying to create good information? The cost in energy, time and intellect remains, because ChatGPT cannot be relied on to do it for you; it is simply not a system built for that. Since creating good information still carries a cost while creating bad information no longer does, disinformation can completely drown out trustworthy sources. Hundreds of false pieces can be churned out before one good piece of information is produced.

Finally, discerning between good and bad information demands time, energy and intellect too, and many people will not even consider where their information is coming from. Going forward, it could become almost impossible to weed through the surplus of disinformation, and even more difficult to convince people that the internet is swarming with it in the first place. While some organizations employ fact-checkers, fact-checking is, frankly, not a good solution to fighting disinformation. The organizations working to fact-check information are too small, compared to the giants manufacturing false truths, to handle it all, and AIs like ChatGPT are only widening this disparity. With disinformation already a crisis without a solution, AI only adds to the struggle.

I agree with Marcus that this should not make us all anti-AI. There are great benefits to having these tools at our fingertips, like getting extra support on homework, helping to draft a paper or doing preliminary research. However, it is important to remember what these systems are actually doing, and what they are not. If we get caught up in the sweep of excitement over the power of ChatGPT, we will miss the underlying damage it is doing. We all need to be educated about how these AIs work and the harm they are inevitably causing. If not, we will dig ourselves further into the hole of this post-truth era.

Lia Richter is a sophomore double-majoring in history and economics.