NY Times - This month, OpenAI, the maker of the popular ChatGPT chatbot, graced the internet with a technology that most of us probably weren’t ready for. The company released an app called Sora, which lets users instantly generate realistic-looking videos with artificial intelligence by typing a simple description, such as “police bodycam footage of a dog being arrested for stealing rib-eye at Costco.”
Sora, a free iPhone app, has been as entertaining as it has been disturbing. Since its release, early adopters have posted videos for fun, like phony cellphone footage of a raccoon on an airplane or fights between Hollywood celebrities in the style of Japanese anime. (I, for one, enjoyed fabricating videos of a cat floating to heaven and a dog climbing rocks at a bouldering gym.)
Yet others have used the tool for more nefarious purposes, like spreading disinformation, including fake security footage of crimes that never happened.
The arrival of Sora, along with similar A.I.-powered video generators released by Meta and Google this year, has major implications. The technology could mark the end of visual fact as we know it: the idea that video can serve as an objective record of reality. Society as a whole will have to treat videos with as much skepticism as people already treat words.