
AI chatbots learned to write before they could learn to think

The internet can't stop talking about an AI program that can write such artful prose that it seems to pass the Turing Test. A December 2022 New York Times article reported that "Three weeks ago, an experimental chatbot called ChatGPT made its case to be the industry's next big disrupter." Mayer asks: what makes you think that LLMs "do not understand what words mean, and consequently cannot use common sense, wisdom, or logical reasoning to distinguish truth from falsehood"? In his view, they have also developed common sense, already ahead of what children are typically capable of, which is no small feat. Mayer proposed an experiment that might "prove" that large language models like GPT-3 can fact-check themselves, in a sense illustrating that they have real intelligence and are not merely parroting other things written online that they have absorbed: "Finally, LLMs like ChatGPT have the amazing ability to fact-check themselves!" Yet when tested, GPT-3's answer faithfully recites the reality that turtles are slow but, not knowing what words mean and confronted with the unusual question of how fast spoons move, GPT-3 simply made stuff up.
