ChatGPT can tell jokes, even write articles. But only humans can detect its fluent bullshit

Photograph: NurPhoto/Getty Images

As the capabilities of natural language processing technology continue to advance, there is growing hype around the potential of chatbots and conversational AI systems. One such system, ChatGPT, claims to be able to engage in natural, human-like conversation and even provide useful information and advice. However, there are valid concerns about the limitations of ChatGPT and other conversational AI systems, and their ability to truly replicate human intelligence and interaction.

No, I didn’t write that. It was actually written by ChatGPT itself, a conversational AI program, after I asked it to create “an opening paragraph for an article skeptical of the capabilities of ChatGPT in the style of Kenan Malik”. I might quibble about the stodgy prose, but it’s an impressive attempt. And it’s not hard to see why there has been such a buzz, indeed infatuation, about the latest version of the chatbot since its release a week ago.

Powered by huge amounts of human-created text, ChatGPT looks for statistical regularities in this data, learns which words and phrases are associated with others, and is thus able to predict which words should follow in a given sentence, and how the sentences fit together. The result is a machine that can convincingly mimic human language.
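To make that idea concrete, here is a minimal sketch, in Python, of prediction from counted word pairs (a so-called bigram model). To be clear, this illustrates the general principle only, not ChatGPT’s actual method: the real system learns patterns with a vast neural network trained on billions of sentences, and everything below, from the training text to the predict_next helper, is invented for the example.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some training
# text, then always predict the statistically most likely next word.
# ChatGPT uses a vast neural network rather than a lookup table, but the
# core task is the same: predict the next word from statistical
# regularities in human-written text.

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one most-likely word at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints: "the cat sat on the cat"
```

Run on its tiny corpus, the sketch emits a fluent-looking but meaning-free string; scaled up by many orders of magnitude, the same predict-the-next-word principle produces the convincing mimicry described above.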

It can write grade A essays, but it will also tell you that crushed glass is a useful health supplement.

This capacity for mimicry allows ChatGPT to write essays and poetry, think up jokes, formulate code, and answer questions, whether posed by a child or an expert. And to do it so well that many over the past week have both celebrated and panicked. “Essays are dead,” wrote the cognitive scientist Tim Kietzmann, a view amplified by many academics. Others claim it will mean the end of Google as a search engine. And the program itself thinks it might be able to replace humans in jobs from insurance agent to court reporter.

And yet the chatbot that can write grade A essays will also tell you that if one woman can have a baby in nine months, nine women can have a baby in one month; that a kilo of beef weighs more than a kilo of compressed air; and that crushed glass is a useful health supplement. It can invent facts and reproduce many of the prejudices of the human world on whose text it was trained.

ChatGPT can be so convincing that Stack Overflow, a platform for developers to get help writing code, has banned users from posting answers generated by the chatbot. “The main problem,” the moderators wrote, “is that even though the answers ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.” Or, as another commentator put it, it is a fluent bullshitter.

Some of these issues will be resolved over time. Every conversation involving ChatGPT becomes part of the body of data used to improve the program. The next iteration, GPT-4, is due next year, and will be more convincing and make fewer errors.

However, beyond such incremental improvement, there is also a fundamental problem confronting any form of artificial intelligence. A computer manipulates symbols. Its program specifies a set of rules for transforming one string of symbols into another, or for recognizing statistical patterns. But it does not specify what those symbols or patterns mean. To a computer, meaning is irrelevant. ChatGPT “knows” (most of the time, at least) what feels meaningful to humans, but not what is meaningful to itself. It is, in the words of the cognitive scientist Gary Marcus, a “mimic that doesn’t know what it’s talking about”.

Humans, while thinking, speaking, reading and writing, also manipulate symbols. For humans, however, unlike computers, meaning is everything.

When we communicate, we communicate meaning. What matters is not just the exterior of a string of symbols but also its interior, not just the syntax but the semantics. Meaning for humans comes from our existence as social beings, embodied and grounded in the world. I make sense of myself only insofar as I live in and relate to a community of other beings who think, feel and speak.

ChatGPT not only reveals the advances of AI but also its limits

Of course, humans lie, manipulate, and are drawn to and promote conspiracy theories that can have devastating consequences. This, too, is part of being social beings. But we recognize humans as flawed, as potentially devious, bullshitting or manipulative.

We tend, however, to view machines as objective and unbiased, or as potentially evil if they’re sentient. We often forget that machines can be biased or just plain wrong, because they are not grounded in the world as humans are, and because they have to be programmed by humans and trained on data gathered by humans.

We also live in a time when surface often matters more than depth of meaning. A time when politicians too often pursue a policy not because it is necessary or right in principle, but because it plays well in focus groups. A time when we often ignore the social context of people’s actions or speech and are dazzled by the literal. A time when students are, in the words of the writer and educator John Warner, “rewarded for… regurgitat[ing] existing information” in a system that “privilege[s] accuracy at the surface level” rather than “develop[ing] their writing skills and their critical thinking”. That ChatGPT seems to write grade A essays so easily, he suggests, “is mostly a commentary on what we value”.

None of this is to deny the remarkable technical achievement that is ChatGPT, or how amazing it is to interact with it. It will undoubtedly become a useful tool, helping to improve both human knowledge and creativity. But we have to keep perspective. ChatGPT not only reveals the progress made in AI, but also its limitations. It also helps illuminate both the nature of human cognition and the character of the contemporary world.

More immediately, ChatGPT also raises the question of how to relate to machines that are far better at bullshitting and spreading misinformation than humans themselves are. Given the difficulties in combating human disinformation, these are not questions that should be put off. We shouldn’t become so mesmerized by ChatGPT’s persuasiveness that we forget the real problems such programs can pose.

• Kenan Malik is an Observer columnist

  • Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, please email it to us at observer.letters@observer.co.uk
