When our generation was growing up, we were taught from an early age to be careful about what we post online: never share personal information or pictures with people we don’t know.
But as we got older, the lines blurred. In a world that increasingly runs on social media engagement and digital technology, it’s no longer possible to keep our personal information and pictures off the internet. Sure, you can set your Facebook or Instagram to private, but for many in the professional world, your information has to be available online so people can reach you. Some of us have jobs with companies that share our name, picture, and bio on their website. Aside from work, many share their opinions on TikTok with nothing but their first name and face. Certainly, that’s not enough information for online predators to target us, right?
Two years ago, my Instagram account was public. That made sense to me: as a future journalist, I wanted people to know who I am, and I wanted anyone who read my work to be able to reach me easily. But I quickly switched my account back to private after discovering a spam bot was using my (fully clothed, non-sexual) photos and name to trick people I knew into clicking suspicious links under the guise of “supporting my porn.”
I thought it couldn’t get much worse than that. But what if they had used my photos to actually make porn? The account was public, and it tried following all my friends and mutuals. It even started following student organizations that my friends and I were part of. If they had attached my face and name to a naked body, how much worse would it have been?
Two years later, I have a sinking feeling that it’s only a matter of time until that does happen. Just days ago, AI-generated images of Taylor Swift being sexually assaulted began circulating on Twitter (the platform Elon Musk insists on calling X). Swift is reportedly considering legal action, but there is no federal law banning deepfake or AI-generated pornography. She will likely not be able to seek criminal charges, but she may be able to pursue a civil suit for misappropriation of her likeness.
But using someone’s likeness for non-consensual sexual purposes is revenge porn, which many laws categorize as sexual harassment. Forty-eight states have laws criminalizing revenge porn, but only ten specifically outlaw AI-generated and deepfake porn. Of those ten, two don’t classify it as a criminal offense, leaving victims to sue in civil court if they want justice.
If Taylor Swift, one of the most powerful women in the world, can’t protect herself from this, what are the rest of us to do? She is, after all, a resident of Tennessee, a state with no law against AI-generated porn.
In response to this rising issue, a few new technologies have emerged to help prevent AI from generating images with your likeness. Researchers at the Massachusetts Institute of Technology and the University of Chicago have each created tools that add a layer of protection to photos, warping any AI-generated image that attempts to use them. But any image we’ve already posted is still up for grabs, and more advanced AI could eventually override these protections.
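For the technically curious: the core idea behind tools like these (MIT’s PhotoGuard and the University of Chicago’s Glaze) is an “adversarial perturbation,” a pixel-level change too small for a human to notice but large enough to confuse the models that ingest the photo. Below is a minimal sketch of that idea in Python with PyTorch. The encoder here is a toy stand-in I made up for illustration; the real tools target specific generative models and are far more sophisticated.

```python
# Minimal sketch of a protective "cloak": search for a tiny pixel
# perturbation that pushes a photo's embedding far from the original,
# so a model relying on that embedding mangles whatever it generates.
# The encoder below is a hypothetical stand-in, not any real tool's model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the image encoder inside a generative model.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1),
)

def cloak(image: torch.Tensor, steps: int = 40,
          epsilon: float = 8 / 255, step_size: float = 1 / 255) -> torch.Tensor:
    """Return a visually identical copy of `image` whose embedding has
    been pushed away from the original's (projected gradient method)."""
    original = encoder(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Loss is the *negative* embedding distance, so minimizing it
        # maximizes how far the cloaked photo drifts in embedding space.
        loss = -torch.norm(encoder(image + delta) - original)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # signed gradient step
            delta.clamp_(-epsilon, epsilon)         # keep the change invisible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

photo = torch.rand(1, 3, 64, 64)  # placeholder for a real photo tensor
protected = cloak(photo)
# Pixel change is tiny, but the embedding has moved substantially.
print("max pixel change:", (photo - protected).abs().max().item())
print("embedding shift:", torch.norm(encoder(protected) - encoder(photo)).item())
```

The catch, as noted above, is that a cloak only protects photos processed before they’re posted; everything already online remains vulnerable.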
But aside from this technology, there is virtually nothing we can do on our own. Compare this with traditional revenge porn, which uses images that actually exist. There are preventative steps we can take against that type of harassment, like using two-factor authentication, sharing only with trusted partners, or simply abstaining from taking nude photos. But even if you never take pornographic pictures of yourself, all it takes to become a victim of AI-generated porn is simply to exist. Even if you’re a never-nude with a flip phone who has never once used social media, you’re at risk just by being around other people and the cameras we all constantly carry. If lawmakers don’t tackle the issue soon, every single person on the planet is more at risk of becoming a sexual harassment victim than ever before.
The next best thing we have is changing the legislation around this issue, and changing it fast. If revenge porn is a crime in 48 states, non-consensually using a person’s likeness to create AI-generated porn should be too. If Swift takes legal action against the perpetrators and the AI developers, it could set a nationwide precedent protecting people from harmful uses of this technology.
Legislation often takes a long time to pass, however. In the meantime, another systemic change needs to happen: more preventative measures from social media platforms. After the images of Swift spread, her name was unsearchable on Twitter for about 24 hours. But spending just a few minutes on Twitter or Instagram makes it evident that those apps are overrun with bots promoting God-knows-what. And not every future victim of this type of harassment will be as powerful or famous as Swift, so searches of their names can’t reasonably be blocked.
Currently, Swift can’t sue Twitter over the spread of the images because Section 230 of the Communications Decency Act shields platforms from liability for what their users post. Essentially, social media platforms are not liable when users post illegal content (if they were, no one would launch a website or app with social features), but they typically write their own terms of service, free from government interference, to keep those spaces safe.
Although this section of the law was meant to protect the First Amendment rights of internet users, it has often served as a shield for giant tech corporations that don’t do their due diligence in protecting users from harmful content. Courts have disagreed, however, about whether an exception should apply to platforms that fail to remove harmful content in a timely fashion. Former law professor Dillon White said in a TikTok that if Swift also takes legal action against Twitter, it could set an incredibly important precedent for the future of both AI and social media.
Remember that spam bot I mentioned earlier, the one that used my likeness? My friends and I reported the account multiple times a day for weeks, yet Instagram never took it down. Seriously, I just checked; it’s still there. My name, first and last, and my face, insinuating that I make porn. For victims of AI-generated porn, it’s not even an insinuation: porn they never made carries their name and face. And as long as online platforms refuse to remove this content, it will keep happening.
Swift taking legal action could protect future victims by finally holding platforms like Instagram and Twitter accountable. Not only could she change laws regarding AI and revenge porn, but she could change the interpretation of one of the most important First Amendment-based laws since the dawn of the internet. What Taylor Swift does next could change everything. So, if you weren’t paying attention to Swift before, you should be now.