TechieTricks.com
Why we must be careful about how we speak of large language models




For decades, we have personified our devices and applications with verbs such as “thinks,” “knows” and “believes.” And in most cases, such anthropomorphic descriptions are harmless.

But we’re entering an era in which we must be careful about how we talk about software, artificial intelligence (AI) and, especially, large language models (LLMs), which have become impressively advanced at mimicking human behavior while being fundamentally different from the human mind.

It is a serious mistake to unreflectively apply to artificial intelligence systems the same intuitions that we deploy in our dealings with each other, warns Murray Shanahan, professor of Cognitive Robotics at Imperial College London and a research scientist at DeepMind, in a new paper titled, “Talking About Large Language Models.” And to make the best use of the remarkable capabilities AI systems possess, we must be conscious of how they work and avoid imputing to them capacities they lack.



Humans vs. LLMs

“It’s astonishing how human-like LLM-based systems can be, and they are getting better fast. After interacting with them for a while, it’s all too easy to start thinking of them as entities with minds like our own,” Shanahan told VentureBeat. “But they are really rather an alien form of intelligence, and we don’t fully understand them yet. So we need to be circumspect when incorporating them into human affairs.”

Human language use is an aspect of collective behavior. We acquire language through our interactions with our community and the world we share with them. 

“As an infant, your parents and carers offered a running commentary in natural language while pointing at things, putting things in your hands or taking them away, moving things within your field of view, playing with things together, and so on,” Shanahan said. “LLMs are trained in a very different way, without ever inhabiting our world.”

LLMs are mathematical models that represent the statistical distribution of tokens in a corpus of human-generated text (tokens can be words, parts of words, characters or punctuation marks). They generate text in response to a prompt or question, but not in the same way that a human would.

Shanahan simplifies the interaction with an LLM as such: “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?” 
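The statistical idea behind "what words are likely to come next" can be sketched with a toy bigram model — a drastic simplification of a real LLM, but the same principle of choosing continuations by their frequency in a training corpus (the corpus and function names here are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "a corpus of human-generated text".
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# For each token, count which tokens follow it (a bigram model --
# vastly simpler than an LLM, but the same statistical idea).
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely continuation of `token`."""
    return following[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

A real LLM replaces these bigram counts with a neural network conditioned on a long context window, but the output is still a probability distribution over next tokens, not a claim about the world.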

When trained on a large-enough corpus of examples, the LLM can produce correct answers at an impressive rate. Nonetheless, the difference between humans and LLMs is extremely important. For humans, different excerpts of language can have different relations with truth. We can tell the difference between fact and fiction, such as Neil Armstrong’s trip to the moon and Frodo Baggins’s return to the Shire. For an LLM that generates statistically likely sequences of words, these distinctions are invisible. 

“This is one reason why it’s a good idea for users to repeatedly remind themselves of what

LLMs really do,” Shanahan writes. And this reminder can help developers avoid the “misleading use of philosophically fraught words to describe the capabilities of LLMs, words such as ‘belief,’ ‘knowledge,’ ‘understanding,’ ‘self,’ or even ‘consciousness.’”

The blurring barriers

When we’re talking about phones, calculators, cars, etc., there is usually no harm in using anthropomorphic language (e.g., “My watch doesn’t realize we’re on daylight saving time”). We know that these wordings are convenient shorthands for complex processes. However, Shanahan warns, in the case of LLMs, “such is their power, things can get a little blurry.”

For example, there is a large body of research on prompt engineering tricks that can improve the performance of LLMs on complicated tasks. Sometimes, adding a simple sentence to the prompt, such as “Let’s think step by step,” can improve the LLM’s capability to complete reasoning and planning tasks. Such results can amplify “the temptation to see [LLMs] as having human-like characteristics,” Shanahan warns.
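The prompt engineering trick mentioned above amounts to appending a fixed phrase to the user's question before sending it to the model. A minimal sketch, in which the wrapper function and its parameter names are hypothetical:

```python
def build_prompt(question: str, chain_of_thought: bool = True) -> str:
    """Wrap a question in a simple Q/A template; optionally append the
    'Let's think step by step' cue described in prompt-engineering research."""
    prompt = f"Q: {question}\nA:"
    if chain_of_thought:
        prompt += " Let's think step by step."
    return prompt

print(build_prompt("If I have 3 apples and buy 2 more, how many do I have?"))
```

Nothing about the model changes; the added sentence simply makes step-by-step continuations statistically more likely, which is why such results tempt us to over-attribute human-like reasoning.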

But again, we should keep in mind the differences between reasoning in humans and meta-reasoning in LLMs. For example, if we ask a friend, “What country is to the south of Rwanda?” and they respond, “I think it’s Burundi,” we know that they understand our intent, our background knowledge and our interests. At the same time, they know what means we have to verify their answer, such as looking at a map, googling the term or asking other people.

However, when you ask an LLM the same question, that rich context is missing. In many cases, some context is provided in the background by adding bits to the prompt, such as framing it in a script-like framework that the AI has been exposed to during training. This makes it more likely for the LLM to generate the correct answer. But the AI doesn’t “know” about Rwanda, Burundi, or their relation to each other.
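The script-like framing described here can be sketched as prepending a template the model has seen many times during training — the frame text below is purely illustrative:

```python
def framed_prompt(question: str) -> str:
    """Prepend a familiar dialogue frame so that accurate-answer
    continuations become statistically more likely."""
    frame = (
        "This is a conversation with a knowledgeable assistant "
        "who answers geography questions accurately.\n"
    )
    return frame + f"User: {question}\nAssistant:"

print(framed_prompt("What country is to the south of Rwanda?"))
```

The frame supplies none of the missing human context; it only shifts the model's output distribution toward text that resembles accurate answers.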

“Knowing that the word ‘Burundi’ is likely to succeed the words ‘The country to the south of Rwanda is’ is not the same as knowing that Burundi is to the south of Rwanda,” Shanahan writes.

Careful use of LLMs in real-world applications

While LLMs continue to make progress, as developers, we should be careful how we build applications on top of them. And as users, we should be careful of how we think about our interactions with them. The framing of our mindset about LLMs and AI, in general, can have a great impact on the safety and robustness of their applications.

The expansion of LLMs might require a shift in the way we use familiar psychological terms like “believes” and “thinks,” or perhaps the introduction of new words, Shanahan said. 

“It may require an extensive period of interacting with, of living with, these new kinds of artifacts before we learn how best to talk about them,” Shanahan writes. “Meanwhile, we should try to resist the siren call of anthropomorphism.”



