Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” high-tech simulated video recordings of people you recognize doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
November 12, 2018
The Guardian

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
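The adversarial setup described above can be sketched in a few dozen lines of plain NumPy. The example below is a deliberately tiny, hypothetical GAN: a one-dimensional linear "generator" learns to mimic samples from a target Gaussian while a logistic "discriminator" learns to tell real samples from generated ones. The target distribution, network shapes, and learning rate are illustrative assumptions, not details from the article — real deepfake systems use deep convolutional networks on images, but the adversarial training loop has the same structure.

```python
import numpy as np

# Toy GAN in one dimension (illustrative assumptions: the "real"
# data is N(4, 1); generator and discriminator are single linear units).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + c maps noise z ~ N(0, 1) toward the data.
a, c = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + b) scores "real" vs "generated".
w, b = 0.1, 0.0
lr, n = 0.01, 64

for step in range(5000):
    x_real = rng.normal(4.0, 1.0, n)   # samples from the real data
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + c                 # samples from the generator

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to separate real from generated samples.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. move generated samples
    # toward where the discriminator currently scores "real".
    d_fake = sigmoid(w * x_fake + b)
    signal = (1 - d_fake) * w
    a += lr * np.mean(signal * z)
    c += lr * np.mean(signal)

# After training, the generator's output distribution should sit
# near the real data's mean, without copying any individual sample.
samples = a * rng.normal(0.0, 1.0, 10_000) + c
print("generated mean:", round(float(samples.mean()), 2))
```

The key point the article makes is visible even at this scale: the generator never sees the real samples directly — it only learns from the discriminator's feedback, which is what lets a GAN produce novel outputs that approximate a data set rather than copying any one item from it.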

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.

Aviv Ovadya and the Coming “Infocalypse”

In a far-ranging, frightening, and fascinating interview, Buzzfeed News catches up with engineer and tech prognosticator Aviv Ovadya, who anticipated the current scourge of “fake news” and says we haven’t seen anything yet.


“He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse.”
By Charlie Warzel
Buzzfeed
February 11, 2018

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it was wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s NewsFeed integrity effort.

“At the time, it felt like we were in a car careening out of control and it wasn’t just that everyone was saying, ‘we’ll be fine’ — it’s that they didn’t even see the car,” he said.

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later: Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.

But it’s what he sees coming next that will really scare the shit out of you. Read more.

Deepfake: AI-Assisted Porn

Hey! What’s my face doing on a porn star’s body?


Everything You Need To Know About The Face-Swap Technology That’s Sweeping The Internet (And Getting Banned Everywhere)
Digg
February 8, 2018

Gal Gadot’s face on someone else’s body. Image: Screenshot from SendVids

In the past couple of months, “deepfake” has gone from a nonsense word to a widely-used synonym for videos in which one person’s face is digitally grafted onto another person’s body. The most popular – and troubling – type of deepfake is artificially produced porn appearing to star famous actresses like Gal Gadot, Daisy Ridley and Scarlett Johansson. Sites like Reddit and Pornhub have made moves to ban pornographic deepfakes in recent days, but it’s never been easier for anyone with an internet connection to make disturbingly real-looking porn by mapping almost anyone’s face over those of porn performers. Here’s what you need to know.

‘Deepfake’ Celebrity Porn First Emerged In December

In the only somewhat hyperbolically titled article “AI-Assisted Fake Porn Is Here and We’re All Fucked,” Motherboard’s Samantha Cole interviewed the first Redditor to post convincing face-swapped videos, who called himself “deepfakes.” (“Deepfake” has since become the term for doctored videos produced by this technology.) “Deepfakes” explained how he created a porn video appearing to star Gal Gadot. Read the rest here.

Can Art Still Shock?

Is Grayson Perry right – can we no longer be outraged by art and literature? From Manet’s Olympia to Pussy Riot and Houellebecq, Adam Thirlwell presents a short history of shock


Can art still shock?
by Adam Thirlwell
The Guardian
23 January 2015

Olympia by Édouard Manet. Photograph: Corbis

For a long time, I’ve been nostalgic for the era of shock. It’s with a certain fondness that I reflect on the crazed year of 1857, which began with Gustave Flaubert in court for his first novel, Madame Bovary (in the presence of a stenographer, hired by Flaubert, for the benefit of an incredulous posterity), followed, six months later, by Charles Baudelaire, on trial for his first book of poems, Les Fleurs du Mal. On both occasions, the unlucky prosecutor was Ernest Pinard, who lamented “this unhealthy fever which induces writers to portray everything, to describe everything, to say everything”. The era of grand trials! Or if not trials, then scandales: like the first night of Stravinsky’s Rite of Spring in 1913, with its catcalling audience; or Duchamp’s impish Fountain – his notorious urinal, signed by R Mutt, submitted to the exhibition of the Society of Independent Artists in New York in 1917, but rejected by its committee.

I was nostalgic because it seemed to me that shock was no longer possible. Or, perhaps more precisely, shock was no longer admissible. We are all, pronounced Grayson Perry, bohemians now – and therefore unshockable by art. And if this is true, it signals a grand and maybe melancholy shift in the nature of art, and in the relation of art to society. It also appears to me – considering, let’s say, Pussy Riot and Ai Weiwei – a slightly provincial argument. And then came the attack on Charlie Hebdo. Continue reading “Can Art Still Shock?”