OpenAI limits Bryan Cranston deepfakes after SAG-AFTRA pressure

OpenAI announced in a joint statement on Monday that it will work with Bryan Cranston, SAG-AFTRA and other actors to protect against deepfakes on its artificial intelligence video creation app, Sora.

The “Breaking Bad” and “Malcolm in the Middle” actor raised concerns shortly after Sora 2 launched in late September, when unauthorized AI-generated clips using his voice and likeness appeared on the app, the Screen Actors Guild-American Federation of Television and Radio Artists said in a post on X.

“I am grateful that OpenAI has revised its policy and guidelines and hope that they, and all companies involved in this work, will respect our personal and professional right to manage the replication of our voice and image,” Cranston said in a statement.

Along with SAG-AFTRA, OpenAI said it will collaborate with United Talent Agency, which represents Cranston, the Association of Talent Agents and Creative Artists Agency to strengthen guardrails against unauthorized AI generations.

CAA and UTA have previously slammed OpenAI over its use of copyrighted material, calling Sora a risk to their clients and their intellectual property.

OpenAI had to block videos of Martin Luther King Jr. on Sora last week at the request of King’s estate after users created “disrespectful depictions” of the civil rights leader.

Zelda Williams, daughter of the late comedian Robin Williams, asked people to stop sending her AI-generated videos of her father shortly after the release of Sora 2.

OpenAI’s approach to copyright restrictions and other likeness-related issues has evolved since Sora 2’s September 30 launch.

On October 3, CEO Sam Altman updated Sora’s opt-out policy, which had previously allowed IP to be used unless studios requested that their content not be used, to give rights holders “more fine-grained control over the creation of the characters.”

Sora has required opt-in consent to use a person’s voice and likeness since launch, though OpenAI said it is now “committed to promptly responding to any complaints it receives.”

The company reiterated its support for the NO FAKES Act, a proposed federal bill designed to protect against unauthorized AI-generated replicas of people’s voices or visual likenesses.

“OpenAI is committed to protecting artists from abuse of their voice and likeness,” Altman said in a statement. “We were an early supporter of the NO FAKES Act when it was introduced last year and will always stand behind artists’ rights.”
