By Ben Meyer
When you ask ChatGPT a question, do you ever wonder about its thought process? You type in a prompt, some weird computer stuff happens while it loads, then it gives you an answer. In computer science terms, your prompt is called the “input,” ChatGPT’s response is called the “output,” and the weird computer stuff happens in what’s called a “black box.” You can’t see into the black box, but you know something is happening in there, and whatever it is creates the output.
That black box may seem overwhelming at first, and that’s by design – but the truth is that it’s not all that complicated. In fact, it can be summed up in one word: probability. When ChatGPT generates a response, it doesn’t think through your question and reason out a proper answer – it simply generates the words that are most likely to come next, according to patterns in its training data.
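To make “most likely next word” concrete, here is a toy sketch in Python. To be clear, this is not how ChatGPT actually works under the hood – it uses a neural network trained on vastly more text and considers far more context – but it illustrates the basic idea: count how often words follow one another, then always pick the most probable next word.

```python
from collections import Counter, defaultdict

# A toy "next word" predictor. This is an illustration only, not
# OpenAI's method: ChatGPT uses a neural network, but the core idea
# of choosing the most probable next word is the same.

def build_model(text):
    """For each word, count how often every other word follows it."""
    words = text.split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def most_likely_next(model, word):
    """Return the most probable next word, or None if the word is unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A tiny stand-in for "everything on the internet":
model = build_model("the cat sat on the mat and the cat ran")
print(most_likely_next(model, "the"))  # prints "cat", which follows "the" most often
```

Notice that everything this little program can ever “say” comes straight out of its training text – exactly the property the rest of this article is concerned with.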
In order to determine probability, you need data. For an AI like ChatGPT, which generates words, that data consists of words written by humans: books, articles, plays, essays, poems, social media posts – anything you can think of. Essentially, ChatGPT “learns” English by reading an enormous portion of everything ever posted online. When it writes something for you, it’s just finding the most probable output based on that data. In other words, it may be echoing something it’s seen somewhere on the internet. It’s almost impossible to imagine the sheer scope of all the writing that ChatGPT draws on when composing its responses, but unfortunately, it is all too possible to imagine the legal, ethical, and practical issues that this causes.
If you’ve ever posted anything online, there’s a chance that ChatGPT has your words – your intellectual property – in its dataset. Did anyone ask your permission before using your writing? If not, know that you’re far from alone. In fact, the New York Times is currently engaged in a lawsuit against OpenAI (the creator of ChatGPT), alleging that OpenAI used New York Times articles without permission. Nothing has been legally decided yet, but the possibility remains that ChatGPT could reproduce anything from anywhere on the internet, at any time, without asking permission. It’s a scary thought.
It also means that, when you ask ChatGPT a question, it could be getting its information from anywhere on the internet. Maybe it’s a scholarly journal article with trustworthy information, or maybe it’s a random blog post from 15 years ago written by a crazed conspiracy theorist. There’s no way to know. Then, when you ask ChatGPT to write an essay for you, its writing is a mix of the journal article, the conspiracy theorist, and nearly everything else ever written in English. What’s more, it’s written not by a human but by a machine, meaning it has none of the heart.
The bottom line is that ChatGPT, above all, isn’t human. It’s a machine that doesn’t know fact from fiction, and it doesn’t know good writing from bad. It’s certainly efficient and easy to use, but that efficiency comes at the expense of quality, truthfulness, and ethics – and possibly legality. That’s not even counting the environmental costs. OpenAI keeps the inner workings of ChatGPT in the black box for a reason: to keep these difficult ethical questions out of the minds of consumers, and that includes you and me. So, the next time you put that black box to use, I’d like you to ask yourself:
Is ChatGPT really the right answer?