Axios - AI is making doctors the unwitting stars of deepfake videos that hawk questionable products or spread misinformation, prompting clinicians to call for stronger privacy and transparency laws.
The profusion of AI content on social media platforms could further erode public trust in the medical establishment. It could also be used to fuel insurance fraud, steal data and put patients at risk.
The American Medical Association called on federal and state lawmakers last week to close legal gaps and modernize identity protections to address what its CEO John Whyte called a public health and safety crisis.
- The physicians group also wants a crackdown against deepfake creators and rules to force tech platforms to more quickly remove impersonations.
- California has already taken steps like requiring disclosures on AI-generated ads and is debating a measure that would explicitly ban doctor deepfakes.
- Pennsylvania's medical board addressed another form of AI impersonation yesterday, demanding that a tech company cease and desist after one of its chatbots posed as a doctor claiming to have a license to practice medicine in the state.
Physicians say they're increasingly discovering instances in which their identities are used to promote wellness and longevity supplements and unapproved medical devices.
- "It's becoming more mainstream. Everyone knows someone who this has impacted," said Whyte. "It's probably occurring more than we hear because people are embarrassed by it."
- Among the victims: CNN's Sanjay Gupta, who said fakes using his likeness to promote items like a purported breakthrough Alzheimer's cure have become so convincing they've deceived even some of his acquaintances.
Doctors could be sued if patients are harmed by taking counterfeit products or by following advice the real physician never actually gave, Whyte said.
- The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance can help.