Meta has filed a lawsuit against a company that ran ads for apps that use artificial intelligence to digitally remove clothing from photos of people without their consent. The legal move follows an investigation by CBS News that uncovered hundreds of ads for these so-called nudify apps on Facebook, Instagram, Messenger, and Threads. Meta hopes the court action will help keep these exploitative tools away from its users.
Legal action against Joy Timeline
Meta’s lawsuit targets Hong Kong-based Joy Timeline and its CrushAI apps. The suit alleges the company tried repeatedly to slip ads past Meta’s review system. Meta is asking the court to block Joy Timeline from advertising on its platforms. A Meta spokesperson said the company will keep taking every step available to protect its community from this kind of abuse.
Last week, CBS News showed that ads for nudify apps kept appearing even after Meta removed earlier ones. Meta said it took down many ads, deleted the offending accounts, and blocked links to the apps, but the advertisers kept finding new ways to disguise their promotions. With the lawsuit, Meta wants to send a clear message that this kind of behavior will not be allowed to hide behind AI.

What are nudify apps?
Nudify apps use generative AI to turn ordinary photos into fake images that depict a person without clothes. These tools most often target women, including female celebrities, but they can be used on anyone. A user simply uploads an image and the app generates a new picture. The resulting images are then sold or shared without the subject’s consent.
The CBS investigation found such apps listed in app stores and advertised on social media. Some ads promised men that these tools would let them see anyone naked. Experts warn the images can fuel blackmail, sextortion, and other crimes. The apps can also fall into the hands of children who do not understand the harm they cause.
Why it matters now
Digital undressing apps violate Meta’s rules on sexual content, harassment, and nonconsensual imagery. The images they produce can damage reputations, harm mental health, and lead to real-world threats. People who find their photos undressed by AI can face shame, fear, and attempts at extortion.
Public outcry over these ads has grown. In April, 404 Media exposed similar apps advertised on Instagram, and Apple and Google later removed many of them from their stores. The city of San Francisco sued the most-visited deepfake nudify sites last year. Now Meta joins these efforts to cut off the supply of tools that dehumanize people.
Meta’s broader safety push
Meta has added new policies and AI tools to spot exploitative ads. The company now flags ads with certain keywords, blocks suspicious links, and uses machine learning to hunt down repeat offenders. Meta also warns users when content may break its rules and encourages them to report abuse.
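For illustration only, here is a minimal sketch in Python of the kind of keyword and link screening described above. Every term, domain, and function name is invented for this example; Meta’s actual ad-review systems are not public and pair filters like these with far more sophisticated machine-learning classifiers.

```python
# Hypothetical sketch of a first-pass ad screen: flag known keywords in the
# ad text and block links to known-bad domains. All values are placeholders.
FLAGGED_TERMS = {"nudify", "undress", "remove clothes"}
BLOCKED_DOMAINS = {"crush-ai-example.test"}

def screen_ad(text: str, link_domains: list[str]) -> list[str]:
    """Return the reasons (if any) an ad should be held for review."""
    reasons = []
    lowered = text.lower()
    for term in sorted(FLAGGED_TERMS):
        if term in lowered:
            reasons.append(f"flagged keyword: {term!r}")
    for domain in link_domains:
        if domain.lower() in BLOCKED_DOMAINS:
            reasons.append(f"blocked link domain: {domain!r}")
    return reasons

if __name__ == "__main__":
    # This example ad would be held on two grounds: a flagged keyword
    # and a blocked link domain.
    print(screen_ad("Undress anyone with one tap!", ["crush-ai-example.test"]))
```

A static screen like this is easy for bad actors to evade with misspellings and fresh domains, which is why the article notes Meta also relies on machine learning to catch repeat offenders.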
The lawsuit is part of Meta’s wider aim to make its platforms safer. The company says it is working with regulators and law enforcement to fight nonconsensual deepfake tools. Meta will continue to update its ad review process and may bring more lawsuits if needed.

What comes next
Joy Timeline will now have to defend itself in court. If Meta prevails, the ruling could force the company to stop advertising on Meta’s platforms and pay damages. The case may also help set new ground rules for how AI apps can be advertised online.
Users can help by reporting any ads or posts that promote nonconsensual AI tools. Staying alert and speaking out is one way to protect friends, family, and strangers from deepfake harm. As AI grows more powerful, it will be up to companies, users, and lawmakers to keep people safe and maintain trust in digital spaces.