ChatGPT’s taste for literary nonsense sparks alarm

OpenAI’s GPT models can often be fooled into declaring that “pseudo-literary” nonsense is great, a German researcher has found.

Christoph Heilig said he found that the models consistently rated such nonsense highly — even when their so-called "reasoning" features were activated — which could have stark implications for the development of artificial intelligence.
