ChatGPT's taste for literary nonsense sparks alarm
Inquirer


PARIS, France — OpenAI’s GPT models can often be fooled into declaring that “pseudo-literary” nonsense is great writing, a German researcher has found. Christoph Heilig said he discovered that the models consistently rated “nonsense” more highly, even when their so-called “reasoning” features were activated, a finding that could have stark implications for the development of artificial intelligence. “It’s very important that we talk about what happens when we don’t build AI as a neutral, robotic helper or assistant” and instead seek to instil human-like aesthetic and moral judgements, the academic at Munich’s Ludwig Maximilian University told AFP. His research presented the models with increasingly far-fetched variations […]
