Matteo Wong is a staff writer for The Atlantic. This appeared in Atlantic Intelligence on June 20, 2025. The full article is here.
There are, roughly, three genres of common experiences with generative-AI tools. The software does something brilliant, like solving a hard math problem; makes your life easier, perhaps by taking notes during a video call; or flubs entirely, insisting that the year is 2024, as a Google AI Overview told me as I wrote this newsletter.
On the strength of the first two categories, Silicon Valley is leveraging AI to remake the web, the economy, education, the workings of government, and seemingly anything else within reach. Billions of people are being pushed to interact with generative AI when searching the web, shopping, or using social media—yet, as I wrote this week, the bots remain, well, janky. They crash, give false information, exhibit gross biases, and are vulnerable to simple cyberattacks. And for all the fixes that AI labs put into place, this unreliability is not a flaw but a feature of how chatbots are designed. “These bots can never guarantee accuracy,” I wrote. “The embarrassing failures are a feature of AI products, and thus they are becoming features of the broader internet.”
To be clear, ChatGPT and everything that has come since have been awe-inducing—and therein lies the problem: Generative-AI products are at once remarkably powerful, remarkably useful, and remarkably unreliable. It is “a technology that works well enough for users to become dependent,” I wrote, “but not consistently enough to be truly dependable.”