February 27, 2024

Some scary stuff about artificial intelligence

Axios - Not since the atomic bomb has so much money been spent in so little time on a technology its own creators admit could … wipe out our entire species, Axios' Jim VandeHei and Mike Allen write. … As technological savants crank out new large-language-model wonders, it's worth pausing to hear their own warnings. Sometimes our world changes so fast, in so many head-spinning ways, that it's impossible to fully capture the wildness — and weirdness — of it all.

  • It's easy to think it's all hype when Google's AI tool, Gemini, portrays a white American founding father as Black, or when an early ChatGPT model urged a reporter to leave his wife. But even skeptics believe that when some of the biggest companies in the world pour this much money into one technology, they're likely to will it into something very powerful.

Let's set aside for one column whether generative AI will save or destroy humanity and focus on the actual words from actual creators of it:

  • Dario Amodei, who has raised $7.3 billion for his AI start-up Anthropic after leaving OpenAI over ethical concerns, says there's a 10% to 25% chance that AI technology could destroy humanity. But if that doesn't happen, he said, "it'll go not just fine, it'll go really, really great."
  • Fei-Fei Li, a renowned AI scholar who is co-director of Stanford's Human-Centered AI Institute, told MIT Technology Review last year that AI's "catastrophic risks to society" include practical, "rubber meets the road" problems — misinformation, workforce disruption, bias and privacy infringements.
  • Geoffrey Hinton — known as "The Godfather of AI," and for 10 years one of Google's AI leaders — warns anyone who'll listen that we're creating a technology that in our lifetimes could control and obliterate us.

OK, maybe that's nuts. But Ilya Sutskever, a pioneering scientist at OpenAI, has warned of the exact same thing. He "became fearful that the technology could wipe out humanity," The New York Times reported nonchalantly. OpenAI CEO Sam Altman takes the stance that AI will probably be great — but still warns that we must be careful not to destroy humanity. Altman, along with the three men above, was among the signatories to this chilling one-sentence statement last year: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." LOTS MORE

 
