NEW YORK (AP) – Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms, or help design advertising campaigns.
But experts fear the darker side of these easily accessible tools could worsen something that primarily harms women: non-consensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.
Since then, the creators of deepfakes have been spreading similar videos and images targeting online influencers, journalists and others with public profiles. There are thousands of videos on a plethora of websites. And some have offered users the opportunity to create their own images, essentially allowing anyone to turn anyone they want into sexual fantasies without their consent, or use technology to harm ex-partners.
The problem, experts say, has grown as it has become easier to make sophisticated and visually convincing deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out new content using existing data.
“The reality is that technology will continue to proliferate, continue to develop, and continue to become as easy as pushing a button,” said Adam Dodge, founder of EndTAB, a group that offers technology-enabled abuse training. “And as long as that happens, people will no doubt … continue to misuse that technology to harm others, primarily through online sexual assault, deepfake pornography, and fake nude images.”
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she Googled an image of herself one day. To this day, Martin says she does not know who created the fake images, or the videos of her engaging in sexual intercourse that she later found. She suspects someone probably took a photo posted on her social media page or elsewhere and turned it into porn.
Horrified, Martin contacted several websites over the years in an attempt to get the images removed. Some didn't respond. Others took them down, but she soon found them back up again.
“You can’t win,” Martin said. “This is something that will always be out there. It’s just like it’s ruined you forever.”
The more she spoke out, the more the problem escalated. Some people even told her that the way she dressed and posted images on social media contributed to the harassment, essentially blaming her for the images rather than their creators.
Eventually, Martin turned her attention to legislation, backing a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they fail to comply with takedown notices for such content from online safety regulators.
But governing the internet is nearly impossible when countries have their own laws for content that is sometimes made on the other side of the world. Martin, currently a lawyer and legal researcher at the University of Western Australia, says she believes the problem needs to be controlled through some sort of global solution.
Meanwhile, some companies behind AI models say they're already restricting access to explicit images.
OpenAI says it has removed explicit content from the data it uses to train the DALL-E image-generation tool, which limits users’ ability to create those types of images. The company also filters requests and says it prevents users from creating AI images of prominent celebrities and politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to report problematic images to moderators.
Meanwhile, startup Stability AI rolled out an update in November that removes the ability to create explicit images using its Stable Diffusion image generator. Those changes came following reports that some users were creating celebrity-inspired nude images using the technology.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques such as image recognition to detect nudity and return a blurry image. But it is possible for users to manipulate the software and generate whatever they want since the company releases its code to the public. Bishara said the Stability AI license “extends to third-party applications based on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
Some social media companies have also tightened their rules to better protect their platforms from harmful materials.
TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered in some way, and that deepfakes of private individuals and young people are no longer allowed. The company had previously banned sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
Gaming platform Twitch also recently updated its explicit deepfake imagery policies after a popular streamer named Atrioc was discovered to have a deepfake porn website open in his browser during a live stream in late January. The site featured fake images of other Twitch streamers.
Twitch had already banned explicit deepfakes, but now showing a glimpse of such content, even if it's intended to express outrage, "will be removed and will result in an enforcement," the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.
Other companies have also tried to ban deepfakes from their platforms, but keeping them away takes diligence.
Apple and Google recently said they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but a report released in 2019 by AI firm DeepTrace Labs found it was almost entirely weaponized against women, and the people most targeted were Western actresses, followed by South Korean K-pop singers.
The same app removed by Google and Apple had been running ads on Meta's platforms, which include Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company's policy restricts both AI-generated and non-AI-generated adult content, and that it has restricted the app's page from advertising on its platforms.
In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit pictures and videos of themselves from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.
“When people ask our senior management what are the boulders coming down the hill that we are concerned about? The first is end-to-end encryption and what that means for child protection. And then the second is artificial intelligence and deepfakes in particular,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which runs the Take It Down tool.
“We haven’t been able to formulate a straight answer yet,” Portnoy said.