ChatGPT’s taste for literary nonsense sparks alarm
By Binu Mathew
OpenAI’s GPT models can often be fooled into declaring that “pseudo-literary” nonsense is great, a German researcher has found.
Christoph Heilig said he discovered that the models consistently rated such "nonsense" highly, even when their so-called "reasoning" features were activated, a finding that could have stark implications for the development of artificial intelligence.
