Reality: Now Faker Than Ever

In a brilliant and dizzying end-of-year rant, Max Read takes stock of how much of our digital world is constructed from weapons-grade fraud, deception, nonsense, hokum, and miscellaneous bullshit.


“How Much of the Internet is Fake? Turns Out, a Lot of It, Actually”
by Max Read
New York Intelligencer
December 26, 2018

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head. Read more.

Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” convincingly fabricated videos of recognizable people doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
The Guardian
November 12, 2018

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
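Since the article only gestures at how a GAN actually works, here is a minimal sketch of the two-network tug-of-war in PyTorch, trained on toy one-dimensional data rather than photos; every architecture choice and hyperparameter below is an illustrative assumption, not anything from Goodfellow’s paper or the deepfake tools described here.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator can't tell apart from "real" data. Here the "real" data
# is just numbers drawn from N(4, 1.5), standing in for the photo archive.
import torch
import torch.nn as nn

# Generator: maps 8-dim random noise to a fake "data point".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4.0 + 1.5 * torch.randn(64, 1)   # samples from the real distribution
    fake = G(torch.randn(64, 8))            # generator's current attempts

    # Train D to tell real from fake (detach so G isn't updated here).
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train G to fool D: push D's score on fakes toward "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near N(4, 1.5).
print(G(torch.randn(5, 8)).detach().squeeze())
```

The key design point is the adversarial loop: the discriminator is rewarded for telling real from fake, the generator is rewarded for fooling it, and the two improve in tandem until the fakes approximate the real distribution.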

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.

The Best Defense Against a Bad Guy With a Bot

During the 2016 US election cycle, automated accounts were wildly successful at spreading lies and propaganda. These researchers suggest weaponizing better bots and aiming them in the opposite direction.


“Bots spread a lot of fakery during the 2016 election. But they can also debunk it.”
by Daniel Funke
Poynter
November 20, 2018

Aside from their role in amplifying the reach of misinformation, bots also play a critical role in getting it off the ground in the first place. According to the study, bots were likely to amplify false tweets right after they were posted, before they went viral. Then users shared them because it looked like a lot of people already had.

“People tend to put greater trust in messages that appear to originate from many people,” said co-author Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida, in the press release. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
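As a rough illustration of that dynamic (and only that; the numbers below are invented for the illustration, not taken from the study), here is a toy Python simulation of how an early burst of bot shares can tip a popularity cascade among ordinary users:

```python
# Toy cascade model: each human shares a tweet only if it already looks
# popular enough. A bot burst right after posting can cross that bar
# before any organic popularity exists.
import random

random.seed(1)

def simulate(bot_boost: int, humans: int = 1000, threshold: int = 20) -> int:
    """Each human shares if the running share count exceeds their personal
    popularity threshold (drawn around `threshold`)."""
    shares = bot_boost  # bots share immediately, before the tweet goes viral
    for _ in range(humans):
        if shares >= random.gauss(threshold, 5):
            shares += 1
    return shares

print(simulate(bot_boost=0))   # without bots: few or no human shares
print(simulate(bot_boost=25))  # the early bot burst tips the cascade
```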

The study suggests Twitter curb the number of automated accounts on social media to cut down on the amplification of misinformation. The company has made some progress toward this end, suspending more than 70 million accounts in May and June alone. More recently, the company took down a bot network that pushed pro-Saudi views about the disappearance of Jamal Khashoggi and started letting users report potential fake accounts.

Nonetheless, bots are still wreaking havoc on Twitter — and some aren’t used for spreading misinformation at all. So what should fact-checkers do to combat their role in spreading misinformation?

Tai Nalon has spent the better part of the past year trying to answer that question — and her answer is to beat the bots at their own game.

“I think artificial intelligence is the only way to tackle misinformation, and we have to build bots to tackle misinformation,” said the director of Aos Fatos, a Brazilian fact-checking project. “(Journalists) have to reach the people where they are reading the news. Now in Brazil, they are reading on social media and on WhatsApp. So why not be there and automate processes using the same tools the bad guys use?” Read more.
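As a sketch of the kind of bot Nalon describes (hypothetical throughout: the debunked-claims database is invented, matching is done with Python’s standard-library difflib, and the platform plumbing is left out, since Aos Fatos’ actual bot code isn’t public), a minimal claim-matching responder might look like this:

```python
# Minimal fact-checking responder sketch: match incoming posts against
# already-debunked claims and draft a reply pointing to the fact-check.
import difflib

# Hypothetical database of debunked claims -> fact-check URLs.
DEBUNKED = {
    "vaccines cause autism": "https://example.org/fact-checks/vaccines",
    "the election was rigged by bots": "https://example.org/fact-checks/election",
}

def best_match(text: str, threshold: float = 0.6):
    """Return (claim, url) for the closest debunked claim, or None."""
    scored = [
        (difflib.SequenceMatcher(None, text.lower(), claim).ratio(), claim)
        for claim in DEBUNKED
    ]
    score, claim = max(scored)
    return (claim, DEBUNKED[claim]) if score >= threshold else None

def draft_replies(posts):
    """Scan a batch of posts and draft replies for any that match."""
    replies = []
    for post in posts:
        hit = best_match(post)
        if hit:
            claim, url = hit
            replies.append(f"This resembles a debunked claim ('{claim}'). See: {url}")
    return replies

print(draft_replies(["I heard the election was rigged by bots!!"]))
```

A production version would swap the dictionary for a fact-check archive and hook `draft_replies` to whatever platform the audience actually reads, which is Nalon’s point: meet the misinformation where it spreads.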

Barney Rosset Documentary Seeks Support

Recently, a team of seasoned and passionate documentary filmmakers launched a Kickstarter project to fund Barney’s Wall, a tribute to the iconoclastic Evergreen Review publisher, First Amendment crusader, and countercultural titan Barney Rosset.

Now, they need a bit more help to cover permissions, attorney fees, and other expenses associated with bringing such a project to fruition. (We can certainly sympathize.)

If you’d like to donate, you can do so here before January 4th, 2019.

And if you aren’t familiar with Rosset, check out his obituary. He’s an essential figure in the development of 20th-century creative rebellion, and it’s a rousing read in its own right.

“Colleagues said he had ‘a whim of steel.’ ‘He does everything by impulse and then figures out afterward whether he’s made a smart move or was just kidding.’”

Academic Journalism?

Three scholars prove once again that you can’t take peer-reviewed academic journals at face value, especially when it comes to “grievance studies.” From Vinay Menon in The Star: “They are self-described liberals. They are merely exposing what many others have claimed in recent years, namely that radicals are polluting certain disciplines from the inside. These ‘social justice warriors,’ the argument goes, are sacrificing objective truth for social constructivism. They are blowing up enlightenment values and the scientific method to advance agendas in the culture wars.”

h/t Peter, Linda, Susanne


“Universities get schooled on ‘breastaurants’ and ‘fat bodybuilding’”
by Vinay Menon
The Star
October 5, 2018

Oh, the humanities.

Fake news grabbed academia by the tweedy lapels this week, after three scholars confessed to a brazen hoax. Over the last year, Helen Pluckrose, Peter Boghossian and James A. Lindsay wrote bogus papers, which they submitted to peer-reviewed journals in various fields they now lump together as “grievance studies.”


In one “study,” published in a journal of “feminist geography,” they analyzed “rape culture” in three Portland dog parks: “How do human companions manage, contribute, and respond to violence in dogs?”

In another, using a contrived thesis inspired by Frankenstein and Lacanian psychoanalysis, they argued artificial intelligence is a threat to humanity due to the underlying “masculinist and imperialist” programming.

They advocated for introducing a new category — “fat bodybuilding” — to the muscle-biased sport. They called for “queer astrology” to be included in astronomy. They offered a “feminist rewrite” of a chapter from Hitler’s Mein Kampf. They searched for postmodern answers to ridiculous queries such as: why do straight men enjoy eating at “breastaurants” such as Hooters? Read more.