Deepfakes and Cheap Fakes – Issues with Terminology and the Incorrect Labelling of Deepfakes

In 2017, Motherboard journalist Samantha Cole was the first to report on deepfakes in her article ‘AI-Assisted Fake Porn Is Here and We’re All Fucked’. Cole reported on a Reddit user called ‘deepfakes’ who had, in his words, ‘found a clever way to do face-swap’ using ‘a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.’ Cole reported that the user had created and posted on Reddit fake pornographic videos of Gal Gadot, Scarlett Johansson, and Taylor Swift, among other celebrity women. This marked the beginning of what has become a serious, growing, and global problem, with women disproportionately affected by the technology.

Data & Society affiliates Britt Paris and Joan Donovan describe a deepfake as ‘a video that has been altered through some form of machine learning to “hybridize or generate human bodies and faces”’. While Reddit has since banned deepfakes on its platform, prohibiting the ‘dissemination of images or video depicting any person in a state of nudity or engaged in any act of sexual conduct apparently created or posted without their permission, including depictions that have been faked’, Cole’s reporting sparked broader discussion about the implications of this technology: what it means for distinguishing the real from the fake, and what it means for individuals who might be targeted by fake pornographic depictions of themselves (technology-facilitated abuse/image-based sexual abuse).
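
For readers unfamiliar with what that ‘machine learning’ actually involves: the early open-source face-swap approach is commonly described as an autoencoder with one shared encoder and a separate decoder per identity. The sketch below (in Python with PyTorch) is only a minimal illustration of that idea; the crop size, layer shapes and variable names are my own assumptions, and real tools add face detection, alignment, masking and more sophisticated losses.

    import torch
    import torch.nn as nn

    # Shared encoder: maps any aligned 64x64 face crop to a compact latent code.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 256),
    )

    def make_decoder() -> nn.Sequential:
        # One decoder per identity: reconstructs that person's face from the latent.
        return nn.Sequential(
            nn.Linear(256, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    decoder_a, decoder_b = make_decoder(), make_decoder()

    # Training (not shown) optimises each decoder to reconstruct its own
    # identity's face crops from the shared latent space.
    # The "swap": encode a frame of person A, decode with person B's decoder,
    # producing B's likeness with A's pose and expression.
    frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for one aligned face crop
    swapped_face = decoder_b(encoder(frame_of_a))

The key trick in this design is that the shared encoder is forced to learn pose and expression features common to both identities, so switching decoders transfers one person’s likeness onto the other’s movements; this is what makes the output a ‘deepfake’ rather than a simple cut-and-paste edit.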

Since 2017, viral deepfakes have been created of Barack Obama, Tom Cruise, and the Queen, depicting them saying and doing things they never said or did.

Law professor and deepfake scholar Danielle Citron and Robert Chesney of the University of Texas School of Law have written about the potentially harmful implications of deepfakes for individuals and society. For individuals, they wrote:

[t]here will be no shortage of harmful exploitations. Some will be in the nature of theft, such as stealing people’s identities to extract financial or some other benefit. Others will be in the nature of abuse, commandeering a person’s identity to harm them or individuals who care about them. And some will involve both dimensions, whether the person creating the fake so intended or not.

Citron and Chesney also discussed the implications of deepfakes for distorting democratic discourse, eroding trust, manipulating elections, and endangering national security, among other things. In her TED talk entitled ‘How deepfakes undermine truth and threaten democracy’, Citron said that ‘technologists expect that with advancements in AI, soon it may be difficult if not impossible to tell the difference between a real video and a fake one.’

While deepfakes pose a threat to society and democracy, a 2019 report by Deeptrace (now Sensity), entitled ‘The State of Deepfakes: Landscape, Threats, and Impact’, found that 96% of deepfakes online were pornographic, and that 100% of those pornographic deepfakes depicted women. Perhaps unsurprisingly, given the origin story of ‘deepfakes’ and the way they were first used to create fake, non-consensual pornographic material, the human and gendered implications of deepfakes remain the most significant threat of this technology.

The incorrect labelling of deepfakes

Deepfake experts and technologists continue to debate what actually constitutes a ‘deepfake’ and whether less advanced forms of media manipulation, such as ‘cheap fakes’, should be considered deepfakes at all. Cheap fakes require ‘less expertise and fewer technical resources‘, whereas deepfakes require ‘more expertise and technical resources‘. ‘Cheap fake’ is a term coined by Paris and Donovan, who define it as:

‘an AV manipulation created with cheaper, more accessible software (or, none at all). Cheap fakes can be rendered through Photoshop, lookalikes, re-contextualizing footage, speeding, or slowing.’
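
By contrast with a deepfake, the ‘speeding, or slowing’ end of that spectrum needs no machine learning at all. As a minimal sketch (Python with OpenCV; the file names are placeholders I have chosen for illustration), re-writing a clip’s frames at half the original frame rate makes the footage play at half speed, the kind of re-timing manipulation that has reportedly been used to make public figures appear impaired:

    import cv2

    # Open the source clip and read its native properties.
    src = cv2.VideoCapture("original.mp4")  # placeholder input path
    fps = src.get(cv2.CAP_PROP_FPS)
    width = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Writing the same frames out at half the original frame rate slows the
    # footage to half speed; no machine learning is involved.
    out = cv2.VideoWriter("slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                          fps / 2, (width, height))
    while True:
        ok, frame = src.read()
        if not ok:
            break
        out.write(frame)

    src.release()
    out.release()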

According to experts, the term ‘deepfake’ is often used loosely and incorrectly to refer to any kind of manipulated or synthetic media, even when the material in question is not technically a deepfake, and media outlets have repeatedly mislabelled content as deepfakes.

The incorrect labelling of content as deepfakes, including the labelling of less advanced manipulated material as deepfakes, has raised concerns among experts that it muddies the waters and compromises our ability to accurately assess the threat deepfakes pose. For example, Mikael Thalen, a writer at the Daily Dot, reported on a case in the US about a ‘mother of a high school cheerleader‘ who was ‘accused of manipulating images and video in an effort to make it appear as if her daughter’s rivals were drinking, smoking, and posing nude.’ Thalen’s article reported that:

The Bucks County District Attorney’s Office, which charged [the mother] with three counts of cyber harassment of a child and three counts of harassment, referenced the term deepfake when discussing the images as well as a video that depicted one of the alleged victims vaping.

According to Thalen’s reporting, ‘[e]xperts are raising doubts that artificial intelligence was used to create a video that police are calling a “deepfake” and is at the center of an ongoing legal battle playing out in Pennsylvania’. World-leading deepfake experts Henry Ajder and Nina Schick addressed the case in a tweet from Schick, pointing out that ‘the footage that was released by ABC wasn’t a #deepfake, and it was irresponsible of the media to report it as such. But really that’s the whole point. #Deepfakes blur lines between what‘s real & fake, until we can’t recognise the former.’

My experience with deepfakes and cheap fakes

At the age of 18, I discovered that anonymous sexual predators had been creating and sharing fake, doctored pornographic images of me, and had been targeting me well before my discovery (since I was 17 years old). Over time, the perpetrators continued to create and share this fake content, which proliferated on the internet and became more and more graphic. The abuse I was experiencing, coupled with a number of other factors, including that there were no specific laws to deal with this issue at the time, eventually led me to speak out publicly about my experiences of altered intimate imagery in 2015/16 and to help fight for law reform in Australia. After a few years, distributing altered intimate images and videos became criminalised across Australia (thanks to the collective efforts of academics, survivors, policy makers and law makers). However, during and after I spoke out, and during and after the law reform changes across Australia, the perpetrators kept escalating their abuse. In 2018, I received an email warning that there was a ‘deepfake video of [me] on some porn sites’. I was later sent a link to an 11-second video depicting me engaged in sexual intercourse, and shortly afterwards I discovered another fake video depicting me performing oral sex. (The 11-second video has been verified by a deepfake expert to be a deepfake; the video depicting me performing oral sex appears, according to a deepfake expert, to be a cheap fake or manual media manipulation; and the altered and doctored images of me are not deepfakes.)

Issues with terminology and the incorrect labelling of deepfakes

While I agree with deepfake experts and technologists that incorrectly labelling content as a deepfake is problematic, there are a number of issues I have with this terminology discussion and the incorrect labelling of deepfakes, which I set out below.

Before I discuss them, I should point out that deepfakes and cheap fakes have been, and can be, created for a number of different purposes and for a number of different reasons, not simply as a way to carry out technology-facilitated abuse. However, given what we know about deepfakes, and how the technology is primarily used to create non-consensual pornographic content of women, these terms are inextricably linked to the technology’s impact on women, and therefore ought to be considered through that lens.

I should also point out that I am neither an academic, a deepfake expert, nor a technologist, but I do have a vested interest in this issue given my own personal experiences. I believe I have just as much of a right to contribute my perspective on these issues: these conversations should be as diverse as possible in the marketplace of ideas, and should not be limited in who can take part when the issue directly and disproportionately affects many women in society.

First, the term ‘deepfake’ is itself problematic, given its origin as a way to commit non-consensual technology-facilitated abuse. Why should we give credence to a term that has been weaponised primarily against women, and in doing so immortalise the ‘legacy’ of a Reddit user whose actions opened up another way for perpetrators to cause enormous harm to women across the world? I take issue with labelling AI-facilitated manipulated media as ‘deepfakes’, even though it is the term the world knows it by. There ought to be a broader discussion about changing our language and terminology for this technology, because the current term does not adequately capture the primary way it has been, and continues to be, used by perpetrators to carry out abuse against women.

Second, the term ‘cheap fake’ is also problematic to the extent that it refers to material that has been manipulated, in less advanced ways than a deepfake, to create fake intimate content of a person. Using the term ‘cheap fake’ to describe fake, manipulated pornographic content of a person risks minimising the harm a victim may be experiencing. The word ‘cheap’ connotes something less valuable and less important, which is particularly harmful in the context of people who might be targets of technology-facilitated abuse.

Third, I would argue that from a potential victim’s perspective, the distinction between deepfakes and cheap fakes is irrelevant, because a victim might experience the same harm regardless of the technology used to make the content.

Fourth, as AI advances, the capacity of laypeople (and even experts) to distinguish what is real from what is fake is likely to diminish, including laypeople’s capacity to determine what is a deepfake and what is a cheap fake. While there is significant validity to experts casting doubt on the incorrect labelling of deepfakes, I worry that as deepfake technology advances, a kind of ‘asymmetry of knowledge’ will emerge between technologists and experts, who will be best placed to determine what is real or fake, and laypeople, who may not be able to do so. This asymmetry ought to be a primary consideration when casting doubt on deepfake claims, especially claims made innocently, whether by a victim of technology-facilitated abuse or by media organisations reporting on it. Innocently claiming something is a deepfake can be distinguished from intentionally claiming something is a deepfake when it is not (or vice versa); in those cases, it is important for experts to cast doubt on such claims.

In conclusion, there ought to be a broader discussion of our use of the terms ‘deepfake’ and ‘cheap fake’, given their origin and primary use case. There is also validity to experts casting doubt on claims of deepfakes; however, important factors ought to be considered when doing so, including the relevance of the distinction and whether claims were made innocently or not.

Take Up Space Unapologetically: Tackling Online Abuse

Learning about the tools and ways we can manage our privacy online is incredibly important in the digital age. We should all be equipped with the knowledge to make informed decisions about our own digital footprint. There are a myriad of reasons why people choose to be more private than public on social media, and vice versa.

However, I am growing wary of general advice given by online safety institutions encouraging people to manage, control or lock down their privacy settings on social media in order to ‘protect’ themselves from forms of online abuse, particularly image-based abuse, which is the focus of this piece.

I argue that such advice may be necessary in specific circumstances, but is problematic as a general course of action because it:

  1. cannot guarantee individuals protection from online abuse;
  2. may mitigate the risk of abuse but often fails to manage victims’ expectations;
  3. shifts responsibility away from perpetrators;
  4. disproportionately disenfranchises certain groups and individuals;
  5. is a short-term fix with long-term consequences;
  6. screams victim blaming under the guise of protection;
  7. is not conducive to creating an online world in which we are all safe and free to express ourselves, let alone exist, without being abused; and
  8. fails to actually address the underlying problem at hand.

At a fundamental level, there is no guarantee that anyone can completely protect themselves in the digital age from certain forms of online abuse, including image-based abuse.

Image-based abuse takes many forms, from distributing or surreptitiously recording intimate images/videos without consent, to threatening to distribute or record them. It includes non-consensually sharing altered intimate images/videos. In the digital age of ‘upskirting’ and ‘downblousing’, people can be victimised without knowing it. A person’s image can be taken from a LinkedIn profile picture, altered into pornography and shared online. The reality is that some forms of online abuse occur beyond our control, even if we follow the advice of controlling or locking down our privacy settings on social media.

The most compelling reason why it may be important, or indeed necessary, to advise people to control or lock down privacy settings on social media in order to protect themselves from image-based sexual abuse is that it may mitigate the risk of abuse occurring or continuing to occur, especially when victims may be in danger. There are two points to make here:

First, when some online safety institutions encourage people to control their social media settings, the advice is not accompanied by the explanation that doing so merely mitigates the risk of online abuse and does not guarantee protection from it.

Failing to qualify statements and calls to lock down your social media fails to adequately manage the expectations of victims and the public. What is more concerning, it gives victims and the public a false sense of security that they are protecting themselves if they follow such advice.

Second, there are horrific cases in which a victim is in danger or is living in fear of the perpetrator or perpetrators. Cases where the abuse is relentless, merciless and unforgiving. Cases where the victim’s safety is of paramount importance, and that means doing everything possible to try to keep the victim safe. As a survivor of image-based abuse, there were times in my journey when I deactivated my social media because the emotional distress was overwhelming. In such cases it may be necessary to encourage victims to manage their social media settings, as sad and unfair as that is. However, I believe such advice should be reserved for specific circumstances rather than offered as a general course of action for the public.

Why? Because as a general course of action, even if it mitigates the risk of online abuse, it places the onus, burden and responsibility squarely on everyone except the perpetrator. It places the burden on us to protect ourselves from online abuse, when the only people who should be changing their behaviour are the perpetrators committing it.

Now, you may be thinking: obviously it’s the perpetrators who should be the ones changing their behaviour, but there are ‘bad’ people in this world who are going to commit these abuses anyway, so common sense would dictate that an appropriate course of action is to control or lock down our social media settings.

While I hear you and understand what you are saying, I would still argue that the defensive approach to managing, controlling or locking down your social media settings is not going to work long-term and is not conducive to creating an online world in which we are all safe and free to express ourselves, let alone exist, without being misappropriated or abused. I’ll explain why shortly.

For now, let’s examine who would be most affected by such general advice. We know that image-based abuse disproportionately affects certain groups in our society: young women, the LGBTQI community, people with disabilities, and others. So when you call on people to control their social media settings, it’s these groups who would be the most receptive to such advice, and therefore the most disproportionately affected by it.

We know that social media is used as an economic opportunity for people to build personal brands or grow businesses; it’s used as a platform to engage and contribute to social and political discourse; and it’s used to connect with friends and family. Sometimes, using social media is necessary for work and career progression.

Encouraging people to control or lock down their social media settings disproportionately locks certain people out of these many benefits, further disenfranchising certain groups and vulnerable individuals. It’s these groups who lose out the most from the cultural life of our times, leaving other demographics to dominate the social media landscape.

In the short term, generally encouraging or advising people to control or lock down their social media settings may mitigate the risk of abuse occurring (though, again, with no guarantee). In the long term, the consequences of such advice can adversely impact the very people you are trying to protect: it configures online discourse in a way that excludes the voices of certain groups and individuals, socially isolates certain groups in our society, and disempowers and deprives people of economic opportunities, among other things.

I’d even go so far as to argue that encouraging people with general advice to manage, control or lock down their social media settings to protect themselves from online abuse is akin to telling people to lock themselves in their houses because the real world is full of dangers.

It’s well-meaning but it screams victim blaming under the guise of protection.

We see victim blaming all the time. It’s the kind of attitude that attacks and criticises the conduct of the victim instead of the perpetrators of a crime. It’s the kind of attitude that shifts accountability and responsibility away from perpetrators and places it on the victim. It’s the sentiment that somehow the victim is at fault for the wrongdoings committed against them, or worse, that the victim deserves the harm.

Victim blaming attitudes are rife in discussions of rape, image-based sexual abuse and family and domestic violence:

If she wasn’t wearing such revealing clothes, she wouldn’t have been raped. If she didn’t send nude photos, he wouldn’t have uploaded them online. If she didn’t post “revealing” photos to social media, they wouldn’t be photoshopped into porn. If she was being abused at home, she should’ve just left him.

Attitudes that shift responsibility away from perpetrators of crime are dangerous for many reasons, but I believe the most concerning is that they are not conducive to creating an online world, let alone a world, in which we are safe to express ourselves, let alone exist, without being abused. To illustrate this, I’ll go back to a point made earlier: there are always going to be ‘bad’ people in this world who commit atrocities, so common sense would dictate that a good course of action is to control or lock down our social media settings. To which I would concede that you’re right, there are always going to be people who perpetrate harm onto others. But I fail to see how anything will stop if we keep advising people to control or lock down their social media settings in order to protect themselves from online abuse.

  • To what end are you advising people to do just that?
  • Are we just going to keep retreating while perpetrators may or may not be held accountable for their actions?
  • And even if we retreat by controlling our social media settings and perpetrators are also held accountable for their behaviour, we’re still the ones who lose out all round. 

If this path continues, I see no end. We’ll be stuck in a cycle where we are forever on the defensive, fostering an online world of fear that makes space for perpetrators, to our detriment. We can’t just stop living because there are bad people out there. We can’t stay locked in the house because there are dangers in the real world, and we shouldn’t be missing out on fully participating in the online world because there are people who perpetrate online abuse. I say:

Take Up Space Unapologetically

Lastly, general advice encouraging people to manage, control or lock down their social media settings does not address the underlying problem at hand. It does not address the reality that perpetrators treat the people they prey upon, commonly women, with no regard for their humanity or dignity. It does not address the motivations behind why perpetrators commit online abuse. Frankly, efforts should focus on holding perpetrators accountable, rather than encouraging people to do this, that or the other to maybe safeguard themselves.

While equipping people with the knowledge to make informed decisions about their digital footprint is important, general advice encouraging people to manage, control or lock down their social media settings in order to protect themselves from forms of online abuse is problematic. I would urge leaders in the online safety space to reconsider giving it.

 


Sexual predators edited my photos into porn – how I fought back

TW: Image-based sexual abuse/sextortion

 

I am LITERALLY SHAKING with emotion as I share my TEDxPerth talk about my experiences of image-based abuse.

It’s also bittersweet, because to this very day I am still experiencing this horrific crime. Not that long ago, an anonymous sexual predator doctored an image of me onto the body of a woman wearing a semi-transparent, nipple-exposing t-shirt with the words ‘I AM A DUMB COW’ written on it, which was shared online. The same sexual predator also doctored another image of me onto the cover of another adult movie, next to the words ‘TREAT ME LIKE A WHORE’. These were the LEAST sexually explicit of the most recent wave of doctored images of me.

There was a time when I would see these doctored images of me on pornographic sites and uncontrollably cry myself to sleep. But now I am so determined to do what I can to combat image-based abuse so that no other person has to be the subject of this dehumanising and potentially life-ruining criminal behaviour, because this issue is SO much bigger than me or any one person.

It is a global issue. It can and does happen to anyone – particularly women, people with disabilities, the LGBTQI community and other vulnerable groups.

While Australia and many countries around the world have criminalised or are in the process of criminalising image-based abuse (revenge porn), there is only so much one country or state can do to combat an issue that transcends jurisdictions.

The international community (including social media and tech companies) MUST work together to help combat this issue because right now too many victims are left without justice. Technology is advancing faster than our laws, and predators are continuing to come up with new ways to abuse others. We need a global plan of action. And we need it now.

There is so much more I would’ve liked to say in this talk, especially to those who are experiencing image-based abuse. If that is you – I really want you to know that you are not alone. You are loved. You are supported. And the fight for justice is as strong as ever. Yes, things might get really tough. People might victim blame and slut shame you. There might not be any justice or recourse. People might invalidate your experiences because they don’t understand that what happens online has real world consequences.

But PLEASE don’t lose hope or give up. Please know you are not to blame. It’s YOUR BODY, YOUR CHOICE – ALWAYS. Please stay strong. I know it is easier said than done, but take each day as it comes. Surround yourself with those who love and support you, because things CAN get better. I know it. (Below I have included the details of the world-first image-based abuse portal created by the Office of the eSafety Commissioner under the amazing leadership of Julie Inman Grant. From personal experience, I can tell you that this service is incredible for those who are looking for support; the staff are professional, kind and so caring.)

I also want to take the time to reiterate something I said in the talk: I do NOT in any way want to take credit for ‘changing the law’ AT ALL. In this journey I have had the privilege of meeting fellow survivors and activists who have fought with all their hearts and might for change; this is on their backs. It’s on the backs of ALL the victims and survivors who have dared to speak out.

One warrior in particular is Brieanna Rose, who inspires me to my very core. She has been instrumental in this change and she deserves to be recognised. She is an incredible warrior, and it is an honour to know her. I love you, Brieanna. I am so grateful for all the work you have done and continue to do for justice.

I have also met some of the incredible academics who have been pivotal in enacting change: Dr Nicola Henry, Dr Anastasia Powell and Dr Asher Flynn, who have contributed so much in this area. Their work and passion are invaluable, and we owe them a great deal of thanks. This is on their backs. They are incredible.

This is also on the backs of women’s rights advocates, tech safety experts, policy advisers, lawyers, politicians and especially the amazing people who helped create such a life-changing piece of legislation at the NSW Attorney General’s Department, including the NSW Attorney General, Mark Speakman, who actually included me in this process. Thank you for giving me a voice, and thank you for giving me a chance to reclaim my name. I can’t tell you how much it has meant to me.

This is also on the backs of so many other stakeholders in Australia and around the world who have worked for years fighting for change and justice in this area. The process of changing the law is not easy; it is long and convoluted, and I am so grateful to every single person who has played a part in fighting against image-based abuse in Australia and beyond. I am so proud of all your work.

I also want to make it clear that I could not have gone through this journey without the support of my immediate family (Dad, Mum and my four sisters: I love you, and thank you for putting up with my non-stop crying during the worst of times) and my best friends, Liam Downey, Mads Duffield and Tanaya Kar, who have supported me from day one. I love you and I am forever indebted to you; you were there for me at my worst and I can never repay you. Thank you to ALL my other close friends who have lifted my spirits and given me strength in my darkest days – you know who you are, I love you dearly!

To everyone who has followed this journey and taken the time to reach out: I’ve said this before and I’ll say it again, it does not go unnoticed and I appreciate it more than you know. I want to thank two particular law professors at my uni, Zara J Bending and Shireen Daft, who have listened, encouraged, empowered and shown so much love, care and support to me; you have been such pillars of strength for me. And thank you to Karin Bentley, who has not only done so much work fighting for tech safety for women, but has supported me and gone out of her way to allow me to share my experiences, bring me to the table, and give me a voice, something that people don’t do often. I am so incredibly grateful to you.

I also want to give A HUGE THANKS to TEDxPerth for allowing me to share my experiences in the first place. Thank you for seeing value in what I had to say and having faith in me. Thank you to all the organisers and curators for VOLUNTARILY doing SO MUCH work putting the event together; TEDxPerth 2017 was a success, and it’s thanks to you. To Andrea Gibbs and Emma, who were nothing short of phenomenal: they helped me so much with this speech, and were brutally honest with me when it sucked BAD. I am so grateful for your help and support; I really can’t express in words how much it meant to me.

If you are currently experiencing a form of image-based abuse, please contact police or there is support available through the WORLD-FIRST image-based abuse portal here: https://www.esafety.gov.au/image-based-abuse/

Senate Passes Civil Penalty Regime to Combat Image-Based Abuse

Today the Senate passed the Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Bill 2017 with some surprising, significant amendments. This Bill is part of the Australian Government’s efforts to combat image-based sexual abuse, and was developed from a public consultation into a proposed civil penalty regime (submissions/public workshops) conducted by the Department of Communications and the Arts between May and July 2017.

The Australian Government’s proposed framework is to establish a Commonwealth civil penalty regime to complement:

  • The world-first image-based abuse complaints portal run by the Office of the eSafety Commissioner which provides: information and advice, options for removing and reporting abusive images and videos, and resources and case studies; and
  • Existing Commonwealth and state and territory criminal offences.

The Bill establishes a civil penalty regime that would, as outlined in the explanatory memorandum: “prohibit the non-consensual posting of, or threatening to post, an intimate image on a ‘social media service’, ‘relevant electronic service’, e.g. email and SMS/MMS, or a ‘designated internet service’, e.g. websites and peer to peer file services”, among other things.

It imposes a civil penalty, rather than criminal liability, of $105,000 for individuals who contravene the prohibition, and a civil penalty of $525,000 for corporations that fail to comply with a ‘removal notice‘, which may require a social media service, relevant electronic service or designated internet service to remove an intimate image from their service.

The Bill also empowers the eSafety Commissioner to investigate complaints, issue formal warnings and infringement notices, and give removal notices and written directions to ensure future contraventions do not occur.

The general consensus in the Senate this week was that Labor, the Australian Greens, and the Nick Xenophon Team welcomed and supported the Turnbull Government’s Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Bill 2017, although, as Labor Senator Deborah O’Neill pointed out, the “Turnbull government has been dragging its feet and has taken far too long to address this issue of image based abuse. The bill comes in the fifth year of the Liberal government and over two years after Labor first proposed stronger measures”.

While Labor supported the Bill as a step in the right direction, they did not think it went far enough. Labor called on the government to criminalise the non-consensual sharing of intimate images citing:

  • The COAG Advisory Panel on Reducing Violence against Women and their Children, which recommended in April 2016 that strong penalties for the distribution of intimate material without consent be developed to “clarify the serious and criminal nature of the distribution of intimate material without consent”;
  • Concerns raised by the Commonwealth Director of Public Prosecutions, in a submission to a Senate inquiry by the Senate Legal and Constitutional Affairs References Committee, that “there are limitations on existing Commonwealth laws to adequately deal with ‘revenge porn’ conduct”;
  • Research from RMIT and Monash University showing that 80% of Australians agree “it should be a crime for someone to share a nude or sexual image of another person without that person’s permission”.

The Australian Government responded to the push to criminalise image-based abuse at the Commonwealth level by pointing out that there is already an existing Commonwealth criminal provision in place: s 474.17 (misuse of a carriage service) of the Commonwealth Criminal Code Act 1995. However, this non-specific, existing provision has been widely criticised for its limited applicability to image-based abuse.

As a result, a significant amendment to the civil penalty regime succeeded in the Senate today, namely to amend the Criminal Code Act 1995 to include specific criminal offences criminalising sharing, and threatening to share, intimate images without consent. While this amendment to introduce criminal offences in conjunction with the proposed civil penalty regime may return to the Senate after transmission through the House, it could mean an incredible move toward justice for victims of image-based abuse.

In the Senate debate, the Australian Greens stated that they were disappointed that the Bill was brought on for debate in such ‘haste‘ without allowing for proper scrutiny (e.g. an inquiry). Australian Greens Senator Jordon Steele-John pointed out that “many of those consulted are under the impression that they will subsequently be given the opportunity to give their thoughts, opinions and expertise in regard to the outcome.”

In light of the lack of proper scrutiny of this Bill, another amendment (sheet 8364 revised) was agreed to: the establishment of an independent review, with a written report, of the operation of the civil penalty regime within three years of the commencement of the proposed legislation.

In addition, the Australian Greens expressed concern that the Turnbull Government has failed to allocate any funding to the cost of running the scheme. While the explanatory memorandum of the Bill provides a ‘Financial Impact Statement‘ stating that the civil penalty regime “might have a minor impact on Commonwealth expenditure or revenue” and that “any additional funding will be considered in the 2018-19 Budget”, there is uncertainty as to the extent of funding needed to carry out the scheme. Labor Senator Louise Pratt also highlighted the “minimal resources that the eSafety Commissioner currently has for undertaking this kind of work”. I anticipate that the question of funding will be discussed in the House.

One Nation Senator Pauline Hanson also spoke about her own experience of being subjected to the ‘degrading’ and ‘embarrassing’ publication of partially nude images of another woman, accompanied by false claims that they were pictures of her. However, Hanson went on to express some dangerous rhetoric about image-based abuse.

Hanson said:

“As the old saying goes, sometimes it takes two to tango. I say to anyone out there who thinks that intimate images of themselves are okay to send via text message or email: ‘Stop it. Keep it for the bedroom.’ People, regardless of your age, it’s in what is told to you by your parents and how you feel about yourself: people have to take responsibility for their own actions. Young people who get requests for intimate images of themselves early in relationships should not do it. Relationships don’t always last, and the person they are with may very well turn nasty on them. I’m very pleased to say that One Nation are a part of putting a dent in this abhorrent trend of shaming people using online methods and intimate images, but I reiterate: I want every man, woman and young adult to know that they too must play a role in ensuring their private photos are kept private.”

This rhetoric by Hanson perpetuates an insidious culture of victim blaming. It sends a harmful message that victims are partly responsible for the horrific and criminal actions of perpetrators, and it may discourage victims from speaking out or seeking help because they feel they are to blame. Perpetrators who share, threaten to share or record intimate images without consent are the ONLY people responsible for image-based abuse – not the victims. Many people, young people and adults alike, are capable of and do engage in the consensual practice of sharing intimate images in a respectful, healthy, safe, loving or intimate way. But image-based abuse is marked by the clear absence of consent and respect. Image-based abuse is perpetrated for various reasons: to humiliate, shame, intimidate, coerce, control, harass and violate victims; it is also perpetrated for sexual gratification, social notoriety and financial gain. Our standards and expectations of behaviour shouldn’t be so low that we hold victims partly responsible for the heinous actions of perpetrators.

When it comes to young people, there is a growing problem of young girls feeling pressured to send intimate images of themselves, and this desperately needs to be addressed with respectful education initiatives and programs. We must teach young people about the safe use of technology and its associated risks, about consent and respect, and we must empower young girls to take control of their online usage and agency. But we mustn’t, in any way, send the message that young people who send intimate images of themselves are somehow responsible for the actions of perpetrators who betray their trust or personal privacy.

To echo the sentiments in the Senate: this Bill is a significant step in the right direction, and when taken in conjunction with the amendment to introduce Commonwealth criminal offences, today marks a significant move toward long-awaited justice for victims.

I am extremely grateful to the Australian Government, Senator the Hon. Mitch Fifield, the Department of Communications and the Arts and all the stakeholders involved in the public consultation of this Bill, as well as everyone who has worked hard for years fighting for justice and accountability. Here’s to hoping for a smooth passage in the House. This is fantastic news!

Proposed Commonwealth Civil Penalties for Image-Based Abuse

The Commonwealth Government has introduced a bill, the Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Bill 2017, aimed at prohibiting and deterring persons and content hosts from sharing intimate images without consent.

Under the proposed law, individuals who post, or threaten to post, an intimate image may be liable to a civil penalty of $105,000.  

Social media providers such as Facebook and Twitter, and other content hosts, electronic services and internet services, may be subject to a ‘removal notice’ requiring them to remove an intimate image shared without consent from their service. If they fail to comply with a requirement under a removal notice, they may be liable to a civil penalty of $525,000.

And as an extra layer of deterrence, persons may be given a direction to cease sharing an intimate image without consent in the future. If they fail to comply with such a direction, they may also be liable to a civil penalty.

My initial thoughts on, and concerns about, the proposed law:

Overall, I am extremely happy to see such strong and hefty fines for perpetrators and content hosts. I am very pleased that the Commonwealth Government has introduced specific, nation-wide proposed laws for combatting image-based abuse. And I am very grateful to the Department of Communications, Senator Mitch Fifield and everyone involved in drafting this Bill, as well as all the stakeholders who participated in the public consultation process.

My main concern is that there is no express provision creating a statutory right for victims to claim compensation or damages for the harm image-based abuse can cause them.

Image-based abuse can cause significant harm to victims including emotional distress, violation, shame, humiliation, damage to their reputation and employability and disruption to their employment or education. Victims can fear for their safety and have suicidal thoughts and/or attempt suicide. I know of victims who have had to take time off work because the emotional distress is so significant. I know victims whose studies have been affected by the actions of perpetrators.

In the area of competition and consumer law, companies that breach the law can be liable to fines, and the law creates a statutory cause of action for affected consumers to claim damages for loss or damage. Yet under this proposed civil penalty regime, while perpetrators and content hosts may be liable to very hefty fines, victims won’t have an express statutory right to damages (or compensation).

The non-consensual sharing of intimate images is a GLOBAL problem, and it is pervasive in our society, with 1 in 5 Australians having experienced image-based abuse according to RMIT and Monash researchers. According to research by the Office of the eSafety Commissioner, ‘women are twice as likely to have their nude/sexual images shared without consent than men’, and ‘women are considerably more likely to report negative personal impacts as a result of image-based abuse’. In August this year image-based abuse became a crime in NSW, and since then there have already been 20 charges. So it is highly likely that, should this civil penalty regime be enacted, a lot of fines will be handed out. Victims should likewise get the justice they deserve for the harm they suffer.

Also, I am happy to see that the definition of ‘intimate image’ in this Bill extends beyond material (photos or videos) depicting, or appearing to depict, a person’s private parts or a person engaged in a private act; it also specifically includes material depicting a person without attire of religious or cultural significance, where that person consistently wears such attire whenever they are in public. This inclusion of religious and cultural factors in the definition of ‘intimate image’ represents an intersectional and culturally sensitive approach to image-based abuse, which is fantastic!

Having said this, I am proud that Australia is taking such strong action.

Image-Based Abuse: The Phenomenon of Digitally Manipulated Images

Image-based abuse, colloquially referred to as ‘revenge porn’ (a misnomer), is an umbrella term: it refers to the non-consensual sharing of intimate images. Contrary to popular belief, there is much more to image-based abuse than the textbook ‘revenge porn’ scenario of the jilted ex-lover sharing nude photos of their ex without consent. Image-based abuse can be perpetrated in a number of ways and for a number of reasons, including (among other things) to control, harass, humiliate, shame, coerce or sexually objectify a victim.

Image-based abuse is the recording, sharing, or threatening to record or share, intimate images without consent. ‘Image’ means a photo or video. ‘Intimate image’ means an image of a person engaged in a private act, of a person’s private parts, or of a person in circumstances in which one would expect to be afforded privacy. ‘Intimate image’ can also mean an image that has been ‘altered’ without consent (digitally manipulated, doctored, photoshopped, etc.) to show a person in any of the above (i.e. engaged in a private act, etc.).

Photo: Noelle Martin (Me). Source: ABC NEWS (Dave Martin)

To date there is little to no research, data or information on the phenomenon of digitally manipulated images, but this issue is known to academics, researchers, cyber safety experts and women’s groups, and this issue is being incorporated into some recent law reform initiatives in Australia.

As a survivor-turned-advocate of this particular type of image-based abuse (link to my story here), I hope to provide some much-needed insight into this form of image-based abuse and the many ways it can occur in the digital age. I will also provide a few tips on what to do if this happens to you.

The insight I provide below cannot tell you the exact extent or frequency of this phenomenon. What I can tell you is that there are horrific online cultures (websites/threads) that host and facilitate the creation and distribution of digitally manipulated images. I can tell you some of its forms, and I can tell you that I’m not the only one. Recent comprehensive research conducted in Australia shows that 1 in 5 Australians experience image-based abuse; while this figure takes in other forms of the issue too, the prevalence of image-based abuse in general is telling.

Forms of Digitally Manipulated Image-Based Abuse

  1. ‘Face Swapping’ 

This form is where person A’s face is photoshopped onto pornographic material in such a way as to suggest that person A is truly depicted in the pornographic material. For me, this form manifested itself when my face was:

  • photoshopped onto images of naked adult actresses engaged in sexual intercourse;
  • photoshopped onto images where I was in highly explicit sexual positions in solo pornographic shots;
  • photoshopped onto images where I was being ejaculated on by naked male adult actors;
  • photoshopped onto images where I had ejaculation on my face; and
  • photoshopped onto the cover of a pornographic DVD.

I must also point out that these altered images quite literally identified me by name. My name was edited onto the bottom of these images in a fancy font, to suggest that I was some adult actress.

2. ‘Transparent Edits’

This form of image-based abuse is where a person’s clothes are digitally manipulated to give the effect of being see-through. For example, a woman’s blouse can be edited so that the appearance of nipples can be seen through her clothes (this happened to me).

3. ‘Cumonprintedpics’

This form of image-based abuse is where a perpetrator has ejaculated onto an image of person A and has taken an image of their semen (with or without their penis) on person A’s image. The perpetrator can then post this second image (containing person A’s image and the perpetrator’s penis/semen) online. There are many forums and websites that feature galleries of this kind of image-based abuse (this happened to me).

4. ‘Bodily Alterations’ 

This form of image-based abuse is where a perpetrator digitally manipulates an image of person A by enlarging or enhancing person A’s private parts, particularly the breasts or behind. The alterations are usually very extreme.

5. ‘Juxtapositions’ 

This form of image-based abuse is where a perpetrator doesn’t necessarily alter an image of person A, but instead juxtaposes (places side by side) an image of person A with, say, a pornographic image of person B, where person B has a similar-looking appearance/body to person A. The perpetrator can explicitly or implicitly indicate that the pornographic image of person B is person A.

6. ‘Unidentifiable Alterations’

This form of image-based abuse is where a perpetrator digitally manipulates an image of person A (into highly sexual material) but person A cannot be (objectively) identified at all. In this grey area, I believe it really doesn’t matter whether person A can be identified by third parties; what matters is whether person A can identify themselves, because it is EXTREMELY violating and degrading to be the subject of digital manipulation in itself. Plain and simple.

These are some of the many ways the phenomenon of digital manipulation can occur.

What can you do if this happens to you?

Unfortunately, the laws in Australia are limited. The NSW Parliament has recently passed an image-based abuse bill that will criminalise distributing, recording, or threatening to distribute or record, intimate images (including ‘altered’ images) without consent. South Australia and Victoria have ‘revenge porn’ laws, but neither explicitly mentions ‘altered’ or digitally manipulated images. The Federal Government is in the process of creating a civil penalty regime to complement existing criminal penalties, which could potentially cover digitally manipulated images. And the Office of the eSafety Commissioner is working on an online complaints mechanism for images shared without consent.

In the meantime, there are options. The eSafety Women website provides a list of what you can do. You can:

  • Collect all the evidence
  • Report it to the police
  • If you are over 18, you can report it to ACORN (the Australian Cybercrime Online Reporting Network)
  • If you are under 18, you can report it to the Office of the eSafety Commissioner.
  • You can contact the webmasters/content hosts and request the removal of the material. (Proceed with caution)
  • Google has a reporting function to remove intimate images that have been shared without consent. Google can remove such images from its search results.
  • Facebook also has the tools to remove intimate images that have been shared without consent from Facebook, Messenger and Instagram.
  • Contact a lawyer and seek advice.
  • Contact local women’s groups/ domestic violence groups.
  • Sign petitions urging Australia to change the law ASAP.

Just remember, you are NOT alone. Wherever you are in the world. ❤ 

If you or someone you know may be suffering from mental illness, contact SANE, the National Mental Health Charity Helpline on 1800 187 263 or Lifeline, a 24 hour crisis support and suicide prevention service on 13 11 14.