New Jersey teen sues AI company over fake nude images created by Clothoff


A New Jersey teenager has filed a lawsuit against the company behind an artificial intelligence (AI) “undressing” tool that allegedly created a fake nude image of her. The case has drawn national attention because it shows how AI can invade privacy in harmful ways. The lawsuit was filed to protect students and teenagers who share photos online and to show how easily AI tools can exploit their images.

Sign up for my free CyberGuy report
Get my best tech tips, urgent security notices and exclusive deals straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.


How fake nude images were created and shared

The plaintiff posted some photos of herself on social media when she was fourteen years old. A male classmate used an AI tool called Clothoff to digitally remove her clothing from one of the pictures. The altered photo kept her face unchanged, making it look real.

The fake image spread quickly through group chats and social media. Now seventeen, she is suing AI/Robotics Venture Strategy 3 Ltd., the company that runs Clothoff. Yale Law School professors, several students, and a trial attorney filed the suit on her behalf.


A New Jersey teenager is suing the makers of an AI tool that created fake nude images of her. (iStock)

The lawsuit asks the court to delete all fake images and stop the company from using them to train AI models. It also seeks removal of the tool from the Internet and financial compensation for emotional harm and loss of privacy.

Legal fight against deepfake abuse

States across the US are responding to the rise of AI-generated sexual content. More than 45 states have passed or proposed laws making it a crime to create or share explicit deepfakes without consent. In New Jersey, creating or sharing deceptive AI media can result in jail time and fines.

At the federal level, the Take It Down Act requires companies to remove objectionable images within 48 hours of a valid request. Despite the new laws, prosecutors still face challenges when developers live overseas or operate through hidden platforms.



The lawsuit aims to stop the proliferation of deepfake “clothing-removal” apps and protect victims’ privacy. (iStock)

Legal experts say the case could set a national precedent

Experts believe the case could change how courts view AI liability. Judges must decide whether AI developers are liable when people misuse their tools, and whether the software itself can be an instrument of harm.

The case highlights another question: How can victims prove damages when no physical act has occurred, but the harm seems real? The results could define how future deepfake victims seek justice.

Is Clothoff still available?

Reports suggest that Clothoff may no longer be accessible in some countries, such as the United Kingdom, where it was blocked after public backlash. However, users in other regions, including the US, still appear to be able to access the company’s web platform, which continues to advertise tools to “remove clothes from photos.”

On its official website, the company includes a short disclaimer addressing the ethics of its technology. It reads, “Is it ethical to use AI generators to generate images? Using AI to generate ‘deepnude’ style images raises ethical considerations. We encourage users to approach this with an understanding of responsibility and respect for the privacy of others, ensuring that using undressing apps is done with full awareness of the ethical implications.”

Whether fully operational or partially restricted, Clothoff’s online presence continues to raise serious legal and ethical questions about how far AI developers should go in allowing such image-manipulation tools to exist.



The case could set a national precedent for holding AI companies accountable for misuse of their tools. (Kurt “CyberGuy” Knutsson)

Why this AI case matters to everyone online

The ability to fake a nude image from a simple photo poses a threat to anyone with an online presence. Teenagers face particular risks because AI tools are easy to use and share. The lawsuit points to the emotional harm and humiliation caused by such images.

Parents and teachers worry about how fast this technology spreads through schools. Lawmakers are under pressure to modernize privacy laws. Companies hosting or enabling these tools must now consider robust security and rapid takedown systems.

What does this mean for you?

If you become the target of an AI-generated image, act quickly. Save screenshots, links and dates before content disappears. Request immediate removal from the website hosting the image. Get legal help to understand your rights under state and federal law.

Parents should openly discuss digital safety. Even innocent photos can be misused. Knowing how AI works helps teens stay alert and make safe online choices. You can also demand stricter AI regulations that prioritize consent and accountability.

Take my quiz: How strong is your online security?

Think your device and data are really protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized analysis of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.

Kurt’s highlights

This case is not just about a teenager. This marks a turning point in how courts deal with digital abuse. The case challenges the notion that AI tools are neutral and asks whether their creators share responsibility for harm. We must decide how to balance innovation and human rights. The court’s decision could influence how future AI laws are developed and how victims seek justice.

If an AI tool creates an image that destroys a person’s reputation, should the company that created it face the same punishment as the person who shares it? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com. All rights reserved.
