Deepfakes and Cheap Fakes – Issues with Terminology and the Incorrect Labelling of Deepfakes

In 2017, Motherboard journalist Samantha Cole was the first to report on deepfakes, in her article ‘AI-Assisted Fake Porn Is Here and We’re All Fucked’. Cole reported on a Reddit user called ‘deepfakes’ who had, in his words, ‘found a clever way to do face-swap’ using ‘a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.’ Cole reported that Reddit user ‘deepfakes’ had created and posted on Reddit fake pornographic videos of Gal Gadot, Scarlett Johansson, and Taylor Swift, among other celebrity women. This marked the beginning of what has become a serious, growing, and global problem, with women disproportionately affected by this technology.

Data & Society Affiliates, Britt Paris and Joan Donovan, describe deepfakes as ‘a video that has been altered through some form of machine learning to “hybridize or generate human bodies and faces”‘. While Reddit has since banned deepfakes on its platform, prohibiting the ‘dissemination of images or video depicting any person in a state of nudity or engaged in any act of sexual conduct apparently created or posted without their permission, including depictions that have been faked’, Cole’s reporting sparked broader discussions about the implications of this technology, and what it means for distinguishing what is real or fake, and what it means for individuals who might be targeted by fake pornographic depictions of them (technology-facilitated abuse/image-based sexual abuse).

Since 2017, viral deepfakes have been created of Barack Obama, Tom Cruise, and the Queen, depicting them doing and saying things they did not say or do.

Law professor and deepfake scholar, Danielle Citron, and Robert Chesney from the University of Texas School of Law, wrote about the potential harmful implications of deepfakes for individuals and society. Citron and Chesney wrote that for individuals:

[t]here will be no shortage of harmful exploitations. Some will be in the nature of theft, such as stealing people’s identities to extract financial or some other benefit. Others will be in the nature of abuse, commandeering a person’s identity to harm them or individuals who care about them. And some will involve both dimensions, whether the person creating the fake so intended or not.

Citron and Chesney also discussed the implications of deepfakes for distorting democratic discourse, eroding trust, manipulating elections, and national security, among other things. In her TED talk, entitled ‘How deepfakes undermine truth and threaten democracy’, Citron said that ‘technologists expect that with advancements in AI, soon it may be difficult if not impossible to tell the difference between a real video and a fake one.’

While deepfakes pose a threat to society and democracy, a 2019 report by Deeptrace (now Sensity), entitled ‘The State of Deepfakes: Landscape, Threats, and Impact’, found that 96% of deepfakes are pornographic, and that 100% of those pornographic deepfakes are of women. Perhaps unsurprisingly, given the origin story of ‘deepfakes’ and the way deepfakes were first used to create fake, non-consensual pornographic material, the human and gendered implications of deepfakes remain the most significant threat of this technology.

The incorrect labelling of deepfakes

There is a lot of discussion among deepfake experts and technologists about what actually constitutes a ‘deepfake’ and whether other forms of less advanced media manipulation, such as ‘cheap fakes’, would be considered deepfakes. Cheap fakes require ‘less expertise and fewer technical resources’, whereas deepfakes require ‘more expertise and technical resources’. ‘Cheap fake’ is a term coined by Paris and Donovan, who define it as:

‘an AV manipulation created with cheaper, more accessible software (or, none at all). Cheap fakes can be rendered through Photoshop, lookalikes, re-contextualizing footage, speeding, or slowing.’

According to experts, the term ‘deepfake’ is being used loosely and incorrectly to refer to any kind of manipulated or synthetic media, even when the material in question is not technically a deepfake. Moreover, according to deepfake experts, media outlets have incorrectly labelled content as deepfakes when it is not.

The incorrect labelling of content as deepfakes, and in particular the labelling of less advanced manipulated material as deepfakes, has raised concerns among experts that it muddies the waters and compromises our ability to accurately assess the threat of deepfakes. For example, Mikael Thalen, a writer at the Daily Dot, reported on a case in the US about a ‘mother of a high school cheerleader’ who was ‘accused of manipulating images and video in an effort to make it appear as if her daughter’s rivals were drinking, smoking, and posing nude.’ Thalen’s article reported that:

The Bucks County District Attorney’s Office, which charged [the mother] with three counts of cyber harassment of a child and three counts of harassment, referenced the term deepfake when discussing the images as well as a video that depicted one of the alleged victims vaping.

According to Thalen’s reporting, ‘[e]xperts are raising doubts that artificial intelligence was used to create a video that police are calling a “deepfake” and is at the center of an ongoing legal battle playing out in Pennsylvania’. In relation to this case, world-leading deepfake experts Henry Ajder and Nina Schick pointed out, in a tweet by Schick, that ‘the footage that was released by ABC wasn’t a #deepfake, and it was irresponsible of the media to report it as such. But really that’s the whole point. #Deepfake blur lines between what’s real & fake, until we can’t recognise the former.’

My experience with deepfakes and cheap fakes

At the age of 18, I discovered that anonymous sexual predators had been creating and sharing fake, doctored pornographic images of me, and that they had been targeting me well before my discovery (since I was 17 years old). Over time, the perpetrators continued to create and share this fake content, which continued to proliferate on the internet and became more and more graphic. The abuse I was experiencing, coupled with a number of other factors, including that there were no specific laws to deal with this issue at the time, eventually led me to speak out publicly about my experiences of altered intimate imagery in 2015/16 and to help fight for law reform in Australia. After a few years, distributing altered intimate images and videos became criminalised across Australia (thanks to the collective efforts of academics, survivors, policy makers and law makers). However, during and after I had spoken out, and during and after the law reform changes across Australia, the perpetrators kept escalating their abuse. In 2018, I received an email warning that there was a ‘deepfake video of [me] on some porn sites’. I was later sent a link to an 11-second video depicting me engaged in sexual intercourse, and shortly after I discovered another fake video depicting me performing oral sex. (The 11-second video has been verified by a deepfake expert to be a deepfake; the fake video depicting me performing oral sex appears, according to a deepfake expert, to be a cheap fake or manual media manipulation. The altered and doctored images of me are not deepfakes.)

Issues with terminology and the incorrect labelling of deepfakes

While I agree with deepfake experts and technologists that incorrectly labelling content as a deepfake is problematic, there are a number of issues I have with this terminology discussion and the incorrect labelling of deepfakes, which I set out below.

Before I discuss them, I should point out that deepfakes and cheap fakes have been, and can be, created for a number of different purposes and for a number of different reasons, not simply as a way to carry out technology-facilitated abuse. However, given what we know about deepfakes, and how they are primarily used to create non-consensual pornographic content of women, these terms are inextricably linked to their impact on women, and therefore ought to be considered through that lens.

I should also point out that I am neither an academic, a deepfake expert, nor a technologist, but I do have a vested interest in this issue given my own personal experiences. I believe I have just as much of a right to include my perspectives on these issues: these conversations should be as diverse as possible in the marketplace of ideas, and should not be limited in who can talk about them when the issue directly and disproportionately affects many women in society.

First, the term ‘deepfake’ is itself problematic, due to its highly problematic origin as a way to commit non-consensual, technology-facilitated abuse. Why should we give credence to a term that has been weaponised primarily against women, and in so doing, immortalise the ‘legacy’ of a Reddit user whose actions have brought to the fore another way for perpetrators to cause enormous harm to women across the world? I take issue with labelling AI-facilitated manipulated media as ‘deepfakes’, even though it is the term the world knows it by. There ought to be a broader discussion about changing our language and terminology for this technology, because the current terminology does not adequately capture the primary way it has been, and continues to be, used by perpetrators to carry out abuse against women.

Second, the term ‘cheap fake’ is also problematic to the extent that it might refer to material that has been manipulated, in less advanced ways than a deepfake, to create fake, intimate content of a person. Using the term ‘cheap fake’ to describe fake, manipulated pornographic content of a person is potentially harmful, as it undermines the harm that a potential victim could be experiencing. The word ‘cheap’ connotes something less valuable and less important, which is particularly harmful in the context of people who might be a target of technology-facilitated abuse.

Third, I would argue that from a potential victim’s perspective, the distinction between deepfakes and cheap fakes is irrelevant, because a potential victim might experience the same harm regardless of the technology used to make the content.

Fourth, as AI advances, the capacity of laypeople (and even experts) to distinguish what is real from what is fake is likely to diminish, including their capacity to determine what is a deepfake and what is a cheap fake. While there is significant validity to experts casting doubt on the incorrect labelling of deepfakes, I worry that as deepfake technology advances, a kind of ‘asymmetry of knowledge’ will emerge between technologists and experts, who will be best placed to determine what is real or fake, and laypeople, who may not be able to do so. This asymmetry ought to be a primary consideration when casting doubt on deepfake claims, especially claims made innocently, whether by a victim of technology-facilitated abuse or by a media organisation reporting on it. Innocently claiming that something is a deepfake can be distinguished from intentionally claiming that something is a deepfake when it isn’t, or vice versa; in those cases, it is important for experts to cast doubt on such claims.

In conclusion, there ought to be a broader discussion about our use of the terms ‘deepfake’ and ‘cheap fake’, given their origins and primary use. There is also validity to experts casting doubt on claims of deepfakes; however, there are important factors that ought to be considered when doing so, including the relevance of the distinction and whether claims were made innocently or not.

Australia: Criminalise ‘Image-Based Sexual Abuse’

Australia needs to criminalise ‘image-based sexual abuse’. This includes:

  • revenge porn – the non-consensual sharing of intimate images;
  • morphed porn – the non-consensual doctoring of ordinary images into pornographic material; and
  • parasite porn – the non-consensual sharing of ordinary images onto pornographic websites.

Online sexual exploitation can happen to anyone but it primarily affects women. It is used as a tool by perpetrators to harm, intimidate, control, threaten, misrepresent or sexually objectify their victims. Technology-facilitated abuse can cause significant harm to victims including emotional distress, violation, shame, humiliation, damage to their reputation and employability and disruption to their employment or education. Victims can fear for their safety and have suicidal thoughts and/or attempt suicide.

A national inquiry on ‘revenge porn’ has already taken place, and in early 2016 the Senate Legal and Constitutional Affairs Committee recommended in a report that the Commonwealth Government and the states/territories make the ‘non-consensual sharing of intimate images’ a criminal offence. The Commonwealth and the majority of our states/territories are yet to enact such laws, or any laws that specifically tackle sexual cybercrime in its various forms. (Seriously, Australia: the US, the UK, Canada and New Zealand are already on top of it.)

Despite the Committee’s recommendations, the Federal Government has shifted its focus to civil penalties, in part due to the distressing and slow nature of criminal proceedings. This move raises significant concerns, because pursuing civil action is arguably among the most costly, lengthy, inaccessible and emotionally taxing features of our entire legal system. The criminalisation of ‘image-based sexual abuse’ would not only provide justice for victims but would serve as a powerful deterrent.

Whilst there are challenges in enforcing laws in this area, such as matters of jurisdiction, the potential anonymity of perpetrators and the rapid dissemination of online material, the Commonwealth does have the tools to fight sexual cybercrime: empowering government agencies such as the recently expanded Office of the eSafety Commissioner and the AFP, and working with internet and social media providers.

Federal Government – A new reporting tool won’t be enough, please criminalise ‘image-based sexual abuse’.

PLEASE SIGN OR SHARE THIS PETITION to send a message to the Australian Government, the Minister for Communications and the states/territories of Australia that online sexual exploitation ought to be criminally sanctioned, that we want justice for victims, and that we want the criminalisation of ‘image-based sexual abuse’. #peoplepower

Pornographic Cybercrimes: Does the Law Protect Your Personal Privacy?

It seems like every other day we are hearing stories in the news of young girls taking their lives because their nude photos were plastered online without consent. We hear about celebrity nude photo hacks. We hear about government ministers in the Northern Territory embroiled in revenge porn scandals. More recently, the issue has hit closer to home, with students from over 70 Australian schools caught up in a pornography ring featuring thousands of non-consensual sexually explicit images of young girls.

The news is dominated by instances of ‘revenge porn’, that is, the distribution of sexually explicit or intimate images of another person without consent, usually by ex-lovers. But we rarely hear about ‘parasite porn’, which is when ordinary images are taken from a person’s social media and posted on threads on pornographic sites, usually alongside offensive and objectifying comments. In other words, you might not have taken a single sexually explicit photo of yourself, but you’re still not immune from being the target of sexual cybercrime.

We also rarely hear about ‘morphed porn’ where ordinary images are manipulated and superimposed on naked bodies and posted on porn sites. The bottom line is that in today’s age of technology, while revenge porn may be on the rise, it is not the only issue compromising our personal privacy.

There has been some talk that law-makers should re-name and categorise revenge porn, parasite porn and morphed porn into what is known as image-based sexual assault. I’d argue that the categorisation of ‘image-based sexual assault’ is preferable as it would encompass a broader range of sexual cybercrimes.

It is very important to know that while young women are the primary targets of such invasions of privacy, anyone can fall victim to sexual cybercrime, yes, even males. I know of one case where a guy had dressed up as an animal for a costume party, only to later find the photos on one of those furry fetish porn sites.

So what laws, if any, are in place to protect our personal privacy in this digital age?

Well, the law has not caught up with advancements in technology, and unfortunately Australia is yet to criminalise revenge porn. Only two state jurisdictions, South Australia and Victoria, have implemented revenge porn legislation. For example, in Victoria it is an offence, punishable by up to two years’ imprisonment, to maliciously distribute, or threaten to distribute, intimate images without consent. However, these criminal provisions have been criticised for being too ‘weak’ a punishment for perpetrators and too ‘broad’ in scope to capture the harm caused by revenge porn.

Since the majority of Australian states have not criminalised revenge porn, victims predominantly have to rely on civil actions, such as copyright or defamation proceedings, to seek redress for invasions of personal privacy. However, contrary to popular opinion, a general tort protecting personal privacy does not exist in Australia. As such, courts have tried to fit cases involving circumstances of ‘revenge porn’ into existing causes of action. What we have ended up with is a quasi-privacy tort: an equitable action for breach of confidence, as set out in the notable personal privacy case of Giller v Procopets.

The recent case of Wilson v Ferguson applied the principles set out in Giller v Procopets and relied on an action for breach of confidence in circumstances of ‘revenge porn.’ In this case, Ferguson and Wilson were involved in sexual relations and shared sexually explicit photos and videos of each other during their relationship. When the relationship ended Ferguson posted the intimate photos of Wilson to Facebook for public viewing without consent. Wilson was left severely emotionally distressed.

But is this quasi-privacy tort effective in dealing with the rise of revenge porn?

Firstly, this quasi-privacy protection fails to effectively punish perpetrators and deter future incidents of sexual cybercrime. The harms felt by victims of sexual cybercrime are significant: victims are more vulnerable to suicide; others experience stalking, depression, emotional distress and humiliation; for some it has affected their employability, and others have lost their jobs. Is it really enough to simply award an injunction and provide monetary compensation to victims under this quasi-privacy protection?

Such harms warrant the criminalisation of revenge porn and the imprisonment of perpetrators. Criminalising revenge porn would provide stronger punishments for perpetrators and would deter future incidents of sexual cybercrime.

Additionally, this quasi-privacy protection in Australia fails to provide adequate justice for victims. It is somewhat paradoxical that civil actions intended to protect our personal privacy don’t necessarily achieve this outcome: an action for breach of confidence means that victims may not remain anonymous, unlike the protection that criminal prosecution affords. In fact, victims may be reluctant to seek civil redress because it is extremely time-consuming, costly and emotionally taxing for already vulnerable victims, and may increase publicity of the photos.

But even if Australia’s laws were to change, there are inherent problems for lawmakers in addressing these issues due to the nature of the digital landscape:

  1. There are difficulties in enforcement and punishing perpetrators, especially where sites are run outside of Australia.
  2. Once an image is online it can be very hard to remove because images can be shared instantaneously all over the internet and before the law can step in much of the damage is already felt by the victim.
  3. There are difficulties in detecting intimate photos, as quite often victims are not aware that their intimate photos have been posted online, and by the time they become aware, the images have gone viral, making their removal near impossible.

In America, the situation is quite different. Already, around 34 states have revenge porn legislation. Most revenge porn legislation in America is based on either the New Jersey or the Californian model, which differ significantly. For example, in New Jersey it is a crime, punishable by up to 5 years’ imprisonment, to disclose any photograph, film, videotape… of another person whose intimate parts are exposed or who is engaged in a sexual act, without consent. Unlike New Jersey, California’s revenge porn law requires that there be an intent to cause serious emotional distress and that the depicted person suffers serious emotional distress.

For Australia, all hope is not lost. In late 2015, Tim Watts MP introduced a Private Members’ Bill in the House of Representatives that would have criminalised revenge porn, although it wasn’t passed into law. In March 2016, the NSW Legislative Council Standing Committee on Law and Justice released a report on serious invasions of privacy, and on 5 September 2016, NSW Attorney-General Gabrielle Upton announced that the NSW Government will seek to criminalise revenge porn.

However, deciding to criminalise revenge porn is just one step in dealing with this issue. For NSW and the rest of Australia, questions arise as to what this new law would prescribe. Would the penalties be stronger than the two years’ imprisonment set out in Victoria and South Australia, or closer to the 5 years of the American models? How will it try to reconcile the inherent problems of enforcement and the removal and detection of photos? Will this new law also capture instances of ‘parasite porn’ or ‘morphed porn’?

So, how do you find out if you’re the victim of sexual cybercrime? A simple Google Image Reverse Search is a start, to see if any of your photos are anywhere on the internet. If, however, you find that there are images of yourself on pornographic sites without your consent, Google now allows you to request the removal of photos or videos from Google search results. We’ve waited a long time for revenge porn legislation, but at least now the future is looking promising for Australia.


Written by Noelle Martin. A version of this article was previously published in The Brief, the Macquarie University Law Society publication.

A Cautionary Tale of Sexual Cybercrime: The Fight to Reclaim my Name

This is a cautionary tale of my experiences as a victim of sexual cybercrime. I’m filled with fear, hesitancy and an overwhelming sense of vulnerability at the prospect of writing this piece. I’ve written a little about my experiences before but never as candidly as what is to follow. This time around, I’m fighting to reclaim my name and image, a name and image that have been stolen from me and used to depict me as something I’m not.

So here goes…

It all started a couple of years ago when I discovered through a simple Google Image Reverse search that dozens of photos from my social media were plastered all over pornographic sites: xhamster.com, sex.com, cumonprintedpics.com, motherless.com, titsintops.com you name it.

But let me make one thing clear: none of my photos are or were sexually explicit. They were just ordinary images of myself that, like everyone else my age and everyone else in today’s internet culture, I would post on social media.

Photo of me taken at age 17

It’s my understanding, after years of dealing with this issue, that the picture above is the one that started it all, or at least caught the attention of some pervert out there.

Somehow the perverts responsible had also managed to find out all of my details, which were also posted on these porn sites: my name, where I lived, what I studied. Some people on the thread were even trying to find out the name of my childhood best friend so they could hack into my Facebook.

What’s more, on these pornographic sites were extremely explicit and highly offensive comments about me that are to this day branded in my mind: ‘Cover her face, and I’d fuck her body,’ and ‘the amount of cum that has been spilt over her could fill a swimming pool.’ I was also called a ‘whale.’

The discovery was traumatising. I was frightened that a perpetrator would try to contact me in person. It was brutal. I immediately went to the police station, but this was before ‘revenge porn’ was dominating discussion in society. The police told me that essentially there was nothing they could do, as there was nothing illegal going on: once you upload a photo to Facebook, anyone can take it and do anything they want with it. I had to contact the websites myself to take the content down and just ensure that my social media settings were set to private.

I know now that what was happening to me is called ‘parasite porn’, the term used when ordinary images are taken from a person’s social media and posted on threads on pornographic sites, usually alongside highly offensive, explicit and objectifying comments.

I also know that there are so many more young women who are victims of ‘parasite porn’ but haven’t a clue, all the while being preyed on by perverted men. The screenshot below is taken from just one website:

As you can see, some young women from Instagram are being preyed upon.

These perverted men might argue that what they’re doing is questionable but that technically they aren’t breaking any laws or rules. Unfortunately, they would be right. Under Facebook’s Statement of Rights and Responsibilities, ‘When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you.’

Perpetrators of ‘parasite porn’ might not be breaking any rules or laws right now. But it’s not far-fetched to imagine that at some point in the future, society witnesses a rise in the incidence of ‘parasite porn’ and we ask ourselves: how are we allowing this? Is it really okay for others to do anything they want with an image they find online, even if it means objectifying, sexualising and preying on the victim? Is this the risk young women have to take to have an online presence? How will we deal with this issue?

So while ‘parasite porn’ might not break any rules or laws, what it does do is open up the floodgates to an even crazier world: the world of ‘morphed porn’, where ordinary images are manipulated and superimposed on naked bodies, or edited to create a more sexualised effect, and posted on porn sites.

This is where my story takes a turn for the worse…

I soon learnt that my face was being photoshopped onto naked women and that I was being depicted as an adult actress. Some images were solo, some were with other porn stars, and in one image I’m being ejaculated on by two men. Today, Photoshop is so advanced that it’s really not that difficult to morph an image and make it look real, and some of mine do, which has been the cause of so many sleepless nights worrying about my future employability.

The newest morphed image is of me photoshopped onto the cover of a porn film: ‘Buttman’s Big Tit Adventure Starring Noelle Martin and 38G monsters’, it says.

From the initial discovery and throughout this process, I contacted all the relevant government agencies and even the Australian Federal Police. I explained my story numerous times but I was always transferred or directed to the next agency or simply not responded to.

So I just had to take matters into my own hands. I frantically went about getting the websites removed, with varying degrees of success. Luckily, most sites obliged my request for deletion, until one particular site: the site containing the ‘morphed images’. I sternly requested this site be deleted, but the webmaster refused to do so unless I sent him intimate images of myself. When I of course refused and demanded the page be removed, he threatened to send the photos to my university and my father. I knew better than to give in to blackmail, so I held strong, but the site wasn’t deleted until much later.

Yet again, I know there are so many girls who literally don’t know about this; it’s a terrifying prospect. The screenshot to the right is from just one site.

Now, some of you may be thinking that I should’ve just had my photo settings on private, or that I shouldn’t upload ‘risqué’ photos, or that I should just quit social media forever.

I thought the same for a long time, I was filled with shame, embarrassment and disappointment. But I’ve come to terms with the fact that I shouldn’t be ashamed at all. I haven’t done anything wrong. Like many others, I’m just another victim of sexual cybercrime.

In fact, now I would say that, firstly, no matter how careful you are with your privacy settings on social media, there are always ways around them. These perverts can and do look through club photos taken by the club photographer, events pages and even your friends’ accounts.

Secondly, blaming the victim is the easy option, especially in this culture of victim-blaming, where victims of ‘revenge porn’ are asked why they sent nude photos in the first place instead of why the boys posted them online. We should be asking why these perverted men aren’t being held to account for their actions and for the harm they have caused not only me, but all the other victims subjected to sexual cybercrime.

Lastly, while it may be common knowledge that the internet is a dangerous place and that we should all be careful about what we put on it, NOBODY expects that when they upload a photo to Instagram or Facebook, they’ll end up being depicted as an adult actress, with their name and image smeared and misrepresented in a sexually explicit and highly offensive way.

Today, the media is dominated by news of ‘revenge porn.’ We know about the harms of revenge porn to victims: they are more vulnerable to suicide, depression, emotional distress, humiliation, and the list goes on. What we don’t hear about are the issues of ‘parasite porn’ and ‘morphed porn’, maybe because most of the victims don’t know they’re victims, which is terrifying enough. But an even more terrifying prospect is that you don’t need to have taken or sent a sexually explicit photo to be at risk.

If you discover that you’re also a victim of ‘parasite porn’ or ‘morphed porn,’ there’s hope still. Now, Google allows you to request the removal of certain photos and videos posted without consent from Google Search Results.

Fitting, it seems, are the words of Brené Brown, the world’s most renowned researcher on shame and vulnerability:

When we deny the story, it defines us. When we own the story, we can write a brave new ending.

So here I am, reclaiming my name.


Featured Image: Zac Quitzau Facebook: Zac’s Doodles


The Unknowing Victims: Revenge Porn, Parasite Porn and Morphed Porn

It was 2am on a Saturday night, and unlike my typical weekend of wild partying, I was enjoying a quiet night in. Little did I know that what started off as aimlessly browsing the web would turn into an even wilder, unexpected three-year ordeal that to this day is far from over. That night, my mind turned to something I had heard about earlier that day: the Google Image Reverse function, which allows you to upload a picture and find out if and where it is on the Internet. So, out of pure curiosity, I decided to try it for myself. What I discovered made my body ache, like the feeling you get from hearing troubling news, but a million times worse. Dozens upon dozens of porn sites had my face featured on their pages, from xhamster to sex.com and many more.

As I opened up the sites, one at a time, I found galleries of my photos; many had been taken from my Facebook page, my friends’ Facebook pages or nightclub albums. My details were listed: name, age, location and what I studied. The comments that were left about me are too explicit and offensive to repeat. ‘Parasite porn’ is the phrase used to describe the posting of photos stolen from social media websites and repurposed for pornographic purposes.

I called the police that night and was told to bring screenshots of the sites to the closest police station. Monday morning came and I travelled to Eastwood Police Station with my laptop in hand. There was nothing they could do. They informed me that once a photo has been posted online, anyone can use it and do whatever they want with it, even if that means misrepresenting you on porn sites. I would like to note that none of the photos were sexually explicit. I pleaded with the police, saying that the lack of consent must make this illegal, but all I could do was contact the sites myself and request that the material be taken down. I have successfully taken down many sites since, but I am still in the process of removing numerous sites which feature morphed photos of me, that is, photos that have been manipulated into a sexually explicit nature. This is an example of ‘morphed pornography’.

My personal experience is not an isolated one. In a society that is dominated by a vast, under-regulated Internet and social media culture, where photo sharing is a commonplace activity, many victims of this sort of cybercrime, predominantly women, find themselves vulnerable and without adequate recourse. Some victims may be able to afford the expenses of remedying the damage, such as lawyers and private investigators. But there is no remedy for the emotional damage to the spirit and dignity of these victims. All too often these crimes are occurring, from the recent celebrity nude photo scandal, where many female A-list celebrities like Jennifer Lawrence had their iCloud accounts hacked and nude photos disclosed, to the very recent incident surrounding 500 Adelaide women who were found to be victims of ‘revenge porn’, the public sharing of sexually explicit material, often by ex-lovers, for the purposes of humiliation.

It is easy to adopt the view that if you don’t want your photos to end up in the wrong hands, you shouldn’t upload them, or that if you do upload them you should expect this kind of backlash. This very sentiment was echoed by Sunrise when, in response to the Adelaide revenge porn incident, they asked ‘when will women learn?’ not to take such photos. The comments made by Sunrise sparked outrage, with feminist writer Clementine Ford calling Sunrise out for ‘victim-shaming’ when in fact these women were merely ‘victims of crime.’

Repairing the damage done by cybercrime can be very difficult; there are issues of jurisdiction where websites are hosted outside the Australian domain, not to mention the adverse effect on the victim’s employability and reputation, as she may become susceptible to slut-shaming. In my opinion, the most dangerous part of all is the fact that many victims of crimes such as these have no idea they are victims until the damage is irreparable.

Luckily, all is not so grim. Amit Singhal, a senior vice-president of Google, has announced that Google will remove from its search results nude or sexually explicit photos that were uploaded without the victim’s consent, although the photos will still appear on the original websites. Whilst Google’s course of action will not solve the issue at hand, it is a huge step in tackling this kind of cybercrime. Numerous states in the US have seen the emergence of revenge porn legislation. Australia is yet to follow suit.

Featured Image: Zac Quitzau Facebook: Zac’s Doodles