Reality: Now Faker Than Ever

In a brilliant and dizzying end-of-year rant, Max Read takes stock of how much of our digital world is constructed from weapons-grade fraud, deception, nonsense, hokum, and miscellaneous bullshit.


“How Much of the Internet is Fake? Turns Out, a Lot of It, Actually”
by Max Read
New York Intelligencer
December 26, 2018

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head. Read more.

Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” high-tech simulated video recordings of people you recognize doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
The Guardian
November 12, 2018

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.
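For the technically curious, the two-network setup the Guardian piece describes is compact enough to sketch. What follows is a minimal, hypothetical illustration of a generative adversarial network in TensorFlow (the library the article mentions), using Keras layers: a generator that turns random noise into candidate images and a discriminator that tries to tell them apart from real ones. The toy data, layer sizes, and names here are placeholder assumptions for illustration; this is not the FakeApp or "Deepfakes" code, and a real face-swapping pipeline would involve far more machinery.

# A minimal GAN sketch in TensorFlow/Keras. Illustrative only: tiny dense
# networks and random placeholder "photos" stand in for a real training set.
import tensorflow as tf

LATENT_DIM = 64          # size of the random noise the generator starts from
IMG_DIM = 28 * 28        # flattened toy images
BATCH = 32

# Generator: maps random noise to a candidate (fake) image.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(IMG_DIM, activation="tanh"),
])

# Discriminator: scores how "real" an image looks (raw logit).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([BATCH, LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_scores = discriminator(real_images, training=True)
        fake_scores = discriminator(fake_images, training=True)
        # Discriminator learns to label real images 1 and generated images 0.
        d_loss = bce(tf.ones_like(real_scores), real_scores) + \
                 bce(tf.zeros_like(fake_scores), fake_scores)
        # Generator learns to make the discriminator call its fakes real.
        g_loss = bce(tf.ones_like(fake_scores), fake_scores)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

# Toy loop: random data stands in for the thousands of photos the article mentions.
real_batch = tf.random.uniform([BATCH, IMG_DIM], -1.0, 1.0)
for step in range(200):
    train_step(real_batch)

The adversarial loop is the whole trick: as the discriminator gets better at flagging fakes, the generator is pushed to produce samples that look more and more like the real training data, which is why the same recipe, fed enough photos of one face, scales up to convincing forgeries.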

Sinclair Broadcasting Screams “Fake News” But They Are Fake News!

Gene Policinski, President & COO of the Newseum Institute, opines on the Sinclair Broadcasting hostage scenario revealed by Deadspin in a video of news anchors all over the country spouting chillingly identical propaganda.


Policinski: Next time, just put your name to the message
by Gene Policinski
Inside the First Amendment
April 7, 2018

Sinclair Broadcasting’s recent promotional message on the state of today’s news — delivered to its TV audiences nationwide — is as protected by the First Amendment as it was an oafish attempt to hide corporate messaging under the veneer of local news reporting.

In other words, it was commentary from a conservative company that has a First Amendment right to express its views, but it was also a shoddy tactic that undermined the very thing Sinclair’s leadership claimed to support: good journalism.

Deadspin — an online sports news site — put together a now widely shared video of news anchors from 45 Sinclair-owned American stations, all reading in synchrony from the same script. The video’s echo-chamber effect laid bare what many have described as an “Orwellian” attempt to deliver a persuasive message using trusted voices in local journalism.

Watch the video:
“Sinclair’s Soldiers in Trump’s War on Media,” video by Deadspin

The mash-up of TV anchors, delivering the script with varying degrees of sincerity, prompted dire warnings from left-leaning cable news commentators about media consolidation and ulterior political motives.

President Trump tweeted a defense of Sinclair, using the controversy to take yet another swipe at the same mainstream news outlets he frequently attacks: “So funny to watch Fake News Networks, among the most dishonest groups of people I have ever dealt with, criticize Sinclair Broadcasting for being biased.”

Trump has it wrong — critics took aim at the method, not the message.

Let’s parse the actual effort… Read the rest of this article here.

April Fools’ Day 2018: Stunt Roundup

The smirking array of pranks, stunts, and fake marketing drives has become a predictable April Fools’ Day rite. Our finest brands and capital-C Creative Teams use this opportunity to trot out wacky ideas and to attempt to out-clever each other in a quest for attention.

You can set your sundial by it, but that’s no reason, in itself, to complain. Plenty of brand-based April Fools’ japes are entertaining, and a few pack genuinely subversive elements.

Sunday finds the virtual prank parade already in progress. The clowns have been rolling out all week, in acknowledgement of the holiday schedule, and probably as part of a phenomenon similar to Christmas Creep, in which April Fools’ Day threatens to slowly engulf more and more of the year.

There are a few unique challenges against which this year’s festival of cleverness must contend. April Fools’ Day falls on a Sunday, and on the Easter holiday, widely observed in nations where influential marketers and media entities are based. It also arrives against a backdrop of extreme distrust and hostility toward advertisers and Silicon Valley tech giants, and amid a political climate in which the US presidential administration’s most favored PR approach resembles gaslighting. Increasingly, the media treat April Fools’ brand stunts with outward cynicism and exhaustion.

In the wake of the Cambridge Analytica controversy that is grinding away at Facebook, tech brands face a tough room this year. Google, in particular, has always embraced cheeky self-awareness in its pranks, a winking sense of “everyone seems to think we’re going to control the world someday – and wouldn’t it be kind of neat if we did?” This year’s battery of GOOG yuks, including a “bad joke detector” and an API for different varieties of hummus, acknowledges the inherent absurdity of Google’s algorithmic, data-driven approach to world domination. Google’s work is state-of-the-art in terms of creative skill, but it feels at least a few weeks behind the times.

In the Scott Dikkers taxonomy of jokes, irony and parody are hard to make stick in 2018. Gentle absurdity, wordplay, and “madcap” humor may be an easier plan.

Coinciding with Easter Sunday may make it harder to nab eyeballs, but some brands are using it to their advantage. The Chocolate Whopper is one of many gags that draw ridiculous associations with holiday sweets. Following up the success of the emoji car horn, one of the most charming 2017 stunts, Honda returns with another winning exercise in pure silliness. One tech company simply gave a crapload of money to people who need it, which may be the most heartwarming and unorthodox 4/1 tactic on record.

In the non-commercial realm, artists and social critics are addressing the elephant in the room, head on. From anonymous Craigslist pranksters to our own head honcho Joey Skaggs and his annual April Fool’s Day parade, there’s plenty of puckish and ambitious parody directed at Trump and his inherently ridiculous milieu.

Arguably, the best thing that can come from the widespread crisis in confidence that is 2018 is a greater premium on critical thinking and on placing our relentless and exhausting news cycle in its broader context.

As usual, Atlas Obscura does rigorous yet unpretentious work putting curiosities and absurdities against the backdrop of history, in an entertaining and approachable fashion. All week, it has showcased examples of old-school irreverence, from bird dung to a theoretical cactus, as a reminder that high-profile pranks have always been with us, and their spirit is always worth preserving and celebrating. (Thanks to Dr. Bob O’Keefe for the tip on this one.)

Aviv Ovadya and the Coming “Infocalypse”

In a far-ranging, frightening, and fascinating interview, BuzzFeed News catches up with engineer and tech prognosticator Aviv Ovadya, who anticipated the current scourge of “fake news” and says we haven’t seen anything yet.


“He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse.”
by Charlie Warzel
BuzzFeed News
February 11, 2018

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it was wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

“At the time, it felt like we were in a car careening out of control and it wasn’t just that everyone was saying, ‘we’ll be fine’ — it’s that they didn’t even see the car,” he said.

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later: Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact.

But it’s what he sees coming next that will really scare the shit out of you. Read more.