Large language models’ ability to generate text also lets them plan and reason


The Economist  

Modern large language models can generate them all, though homework-shirkers should beware: the models may get some facts wrong, and are prone to flights of fancy that their creators call “hallucinations”. An LLM trained on large amounts of text, says Nathan Benaich of Air Street Capital, an AI investment fund, “basically learns to reason on the basis of text completion”. PaLM-E, created by researchers at Google, uses an “embodied” LLM, trained using sensor data as well as text, to control a robot. Dr Liang notes that today’s LLMs, which are based on the so-called “transformer” architecture developed by Google, have a limited “context window”, akin to short-term memory.
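The “context window” idea can be sketched in a few lines of code. This is a toy illustration only, not how any real model is implemented: the window size and the token list below are invented for the example, and real systems count subword tokens produced by a tokenizer.

```python
# Toy sketch of a limited "context window": the model can only attend to the
# most recent N tokens, so older history effectively falls out of memory.
# CONTEXT_WINDOW and the token strings are hypothetical values for illustration.
CONTEXT_WINDOW = 8


def truncate_to_window(tokens, window=CONTEXT_WINDOW):
    """Keep only the last `window` tokens, like short-term memory:
    anything earlier is invisible to the model."""
    return tokens[-window:]


# 12 tokens of conversation history, labelled tok0 .. tok11.
conversation = ["tok%d" % i for i in range(12)]
visible = truncate_to_window(conversation)
print(len(visible))  # 8 — only the last 8 tokens fit in the window
print(visible[0])    # tok4 — tok0..tok3 have been forgotten
```

The slicing trick `tokens[-window:]` is the whole idea: whatever does not fit in the window simply never reaches the model, which is why long conversations or documents exceed what current transformers can “remember”.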
