Confessions of a Rock and Roll Poser

Last autumn, Jered “Threatin” Eames staged the most alienating, least explicable rock tour stunt since the Sex Pistols hit the Deep South. He recently broke his silence.


“The Great Heavy Metal Hoax”
by David Kushner
Rolling Stone
December 14, 2018

In November, managers of rock clubs across the United Kingdom began sharing the same weird tale. A pop-metal performer, Threatin, had rented their clubs for his 10-city European tour. Club owners had never heard of the act when a booking agent approached them promising packed houses. Threatin had fervent followers, effusive likes, rows of adoring comments under his YouTube concert videos, which showed him windmilling before a sea of fans. Websites for the record label, managers and a public-relations company that represented Threatin added to his legitimacy. Threatin’s Facebook page teemed with hundreds of fans who had RSVP’d for his European jaunt, which was supporting his album, Breaking the World.

But despite all the hype, almost no one came to the shows. It was just Threatin and his three-piece band onstage, and his wife, Kelsey, filming him from the empty floor. And yet Threatin didn’t seem to care — he just ripped through a set as if there was a full house. When confronted by confused club owners, Threatin just shrugged, blaming the lack of audience on bad promotion. “It was clear that something weird was happening,” says Jonathan “Minty” Minto, who was bartending the night Threatin played at the Exchange, a Bristol club, “but we didn’t realize how weird.” Intrigued, Minto and his friends started poking around Threatin’s Facebook page, only to find that most of the fans lived in Brazil. “The more we clicked,” says Minto, “the more apparent it became that every single attendee was bogus.”

It all turned out to be fake: The websites, the record label, the PR company, the management company, all traced back to the same GoDaddy account. The throngs of fans in Threatin’s concert videos were stock footage. The promised RSVPs never appeared. When word spread of Threatin’s apparent deception, club owners were perplexed: Why would someone go to such lengths just to play to empty rooms? Read more.

Reality: Now Faker Than Ever

In a brilliant and dizzying end-of-year rant, Max Read takes stock of how much of our digital world is constructed from weapons-grade fraud, deception, nonsense, hokum, and miscellaneous bullshit.


“How Much of the Internet is Fake? Turns Out, a Lot of It, Actually”
by Max Read
New York Intelligencer
December 26, 2018

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head. Read more.

Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” high-tech simulated video recordings of people you recognize doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
The Guardian
November 12, 2018

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
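The adversarial setup described above can be sketched in a few dozen lines. The toy below is a hypothetical, NumPy-only illustration (not code from the article, and all hyperparameters are made up): a one-line “generator” learns to imitate 1-D Gaussian data while a logistic-regression “discriminator” learns to tell real samples from generated ones. Real GANs use deep neural networks for both players, but the training loop has the same shape.

```python
# Minimal sketch of a GAN training loop (illustrative only): a tiny
# "generator" learns to imitate 1-D Gaussian data while a logistic
# "discriminator" learns to tell real samples from generated ones.
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 4.0, 0.5      # the "data set" being imitated
BATCH, STEPS, LR = 64, 4000, 0.03

wg, bg = 1.0, 0.0                   # generator: x_fake = wg * z + bg
w1, b1 = 0.0, 0.0                   # discriminator: D(x) = sigmoid(w1*x + b1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(STEPS):
    x_real = rng.normal(REAL_MU, REAL_SIGMA, BATCH)
    z = rng.normal(0.0, 1.0, BATCH)
    x_fake = wg * z + bg

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w1 * x_real + b1)
    d_fake = sigmoid(w1 * x_fake + b1)
    g_real = -(1.0 - d_real)        # dloss/dscore on real samples
    g_fake = d_fake                 # dloss/dscore on fake samples
    w1 -= LR * (np.mean(g_real * x_real) + np.mean(g_fake * x_fake))
    b1 -= LR * (np.mean(g_real) + np.mean(g_fake))

    # Generator step (non-saturating loss): minimize -log D(fake),
    # i.e. nudge the fake samples toward whatever fools the discriminator.
    d_fake = sigmoid(w1 * x_fake + b1)
    g_x = -(1.0 - d_fake) * w1      # dloss/dx_fake
    wg -= LR * np.mean(g_x * z)
    bg -= LR * np.mean(g_x)

samples = wg * rng.normal(0.0, 1.0, 1000) + bg
print(f"generated mean ~ {samples.mean():.2f} (data mean {REAL_MU})")
```

With a linear discriminator the generator can only learn to match the data’s mean; it is the richer discriminators and generators of real GANs that let them match entire image distributions, as in the Obama example above.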

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.

The Best Defense Against a Bad Guy With a Bot

During the 2016 US election cycle, artificial intelligence was wildly successful at spreading lies and propaganda. These researchers suggest weaponizing better bots and aiming them in the opposite direction.


“Bots spread a lot of fakery during the 2016 election. But they can also debunk it.”
by Daniel Funke
Poynter
November 20, 2018

Aside from their role in amplifying the reach of misinformation, bots also play a critical role in getting it off the ground in the first place. According to the study, bots were likely to amplify false tweets right after they were posted, before they went viral. Then users shared them because it looked like a lot of people already had.

“People tend to put greater trust in messages that appear to originate from many people,” said co-author Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida, in the press release. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
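A stripped-down way to see why that early boost matters: in the toy model below (an illustrative sketch, not the study’s methodology, and every number in it is invented), each user shares a post with a probability that rises with the share count they see. Seeding the counter with a few hundred bot “shares” before humans arrive multiplies the eventual human cascade.

```python
# Toy model of early bot amplification (illustrative only): users share a
# post with probability base_p plus a "social proof" term proportional to
# the share count they currently see. Bots inflate that count up front.
import random

def cascade_size(bot_boost, n_users=10_000, base_p=0.0005,
                 social_p=0.00002, seed=42):
    rng = random.Random(seed)
    shares = bot_boost          # bots pad the counter before humans arrive
    human_shares = 0
    for _ in range(n_users):
        p = min(1.0, base_p + social_p * shares)
        if rng.random() < p:
            shares += 1
            human_shares += 1   # each human share raises p for later users
    return human_shares

organic = cascade_size(bot_boost=0)
boosted = cascade_size(bot_boost=200)
print(f"human shares without bots: {organic}, with 200 bot shares: {boosted}")
```

The social_p term is the model’s stand-in for the trust effect Ciampaglia describes: real people share more readily when a message already looks popular, so the bots’ padded counter does its damage before a single human has been fooled.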

The study suggests Twitter curb the number of automated accounts on social media to cut down on the amplification of misinformation. The company has made some progress toward this end, suspending more than 70 million accounts in May and June alone. More recently, the company took down a bot network that pushed pro-Saudi views about the disappearance of Jamal Khashoggi and started letting users report potential fake accounts.

Nonetheless, bots are still wreaking havoc on Twitter — and some aren’t used for spreading misinformation at all. So what should fact-checkers do to combat their role in spreading misinformation?

Tai Nalon has spent the better part of the past year trying to answer that question — and her answer is to beat the bots at their own game.

“I think artificial intelligence is the only way to tackle misinformation, and we have to build bots to tackle misinformation,” said the director of Aos Fatos, a Brazilian fact-checking project. “(Journalists) have to reach the people where they are reading the news. Now in Brazil, they are reading on social media and on WhatsApp. So why not be there and automate processes using the same tools the bad guys use?” Read more.

Aviv Ovadya and the Coming “Infocalypse”

In a far-ranging, frightening, and fascinating interview, Buzzfeed News catches up with engineer and tech prognosticator Aviv Ovadya, who anticipated the current scourge of “fake news” and says we haven’t seen anything yet.


“He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse.”
By Charlie Warzel
Buzzfeed
February 11, 2018

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in the San Francisco Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it was wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading or polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

“At the time, it felt like we were in a car careening out of control and it wasn’t just that everyone was saying, ‘we’ll be fine’ — it’s that they didn’t even see the car,” he said.

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later: Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.

But it’s what he sees coming next that will really scare the shit out of you. Read more.