AI’s black boxes just got a little less mysterious
By Binu Mathew
One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.
That’s because large language models, the type of AI systems that power ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are.
Instead, these systems essentially learn on their own, by ingesting vast amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence.
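That pattern-learning-then-prediction loop can be sketched, in drastically simplified form, as a toy bigram model: it counts which word tends to follow which in some text, then uses those counts to guess the next word. This is a minimal illustration with a made-up corpus, not how a real large language model is built — actual LLMs use neural networks over billions of examples.

```python
import random
from collections import defaultdict

# Hypothetical training text; a real model ingests vast amounts of data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Learn" by counting how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

Even this crude version shows why the result is hard to inspect: the "knowledge" lives in accumulated statistics, not in rules an engineer wrote down — and in a real model those statistics are billions of numerical weights rather than a small table of counts.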