
Channel 4 News has revealed that the disturbing trend of deepfake pornography is taking a heavy toll on British celebrities. According to the investigation, more than 250 UK public figures have been caught in this cyber web, their faces artificially superimposed onto explicit content.

An analysis of the five most-visited deepfake websites found that nearly 4,000 well-known figures had been targeted, 255 of them British. The victims were predominantly female actors, TV personalities, musicians, and YouTubers.

The Threat And The Online Safety Act

The websites hosting this gross violation of privacy and personal dignity amassed an alarming 100 million views in just three months.

Channel 4 News presenter Cathy Newman, herself a victim of this cybercrime, expressed her distress, describing the experience as a violation, made worse by the fact that the perpetrators remain hidden while the doctored images of their victims are easily accessible.

In an attempt to combat this rising menace, the UK brought the Online Safety Act into force on 31 January, making the distribution of non-consensual explicit content illegal.

However, a surprising loophole remains: creating such content is not itself illegal, only sharing it is. The Act was enacted in response to the rapid rise of deepfake pornography generated through AI tools and apps.

Deepfake Pornography: A Torrential Increase

Deepfake pornography has risen exponentially since the first video surfaced in 2016. In the first three quarters of 2023 alone, a staggering 143,733 new videos were uploaded to the top 40 deepfake pornography websites, surpassing the total from all previous years combined.

Sophie Parrish, 31, from Merseyside, became a victim of this heinous act before the law came into force. Parrish described her experience as violent and degrading, sharing her anguish with Channel 4 News and saying it felt as though women were being treated as mere objects.

Broadcast Watchdog Ofcom Steps In

Enforcement of the Online Safety Act will be overseen by the broadcasting watchdog Ofcom, which is currently consulting on how the rules will apply. An Ofcom representative stressed the disturbing and damaging nature of illegal deepfake material and urged companies to actively prevent and remove such content.

Tech Giants Respond

Both Google and Meta (owner of Facebook and Instagram) have acknowledged the distress caused by such content and have pledged continued efforts to strengthen existing protections. Google's measures include removing pages containing victims' likenesses from search results and improving its ranking systems to address such content more broadly.

Ryan Daniels, a representative from Meta, emphasized their strict prohibition policy against explicit content involving minors and AI-generated non-consensual nudity. The tech giant has also taken steps to eliminate associated ads and accounts.

Addressing A Global Concern

The scale of AI misuse calls for a stronger, more comprehensive response from tech companies, lawmakers, and society at large, both to protect victims and to prevent such gross violations in the future.