Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” high-tech simulated video recordings of people you recognize doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
November 12, 2018
The Guardian

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it had come up with an entirely new portrait of the former president, one that was never actually taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
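To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in TensorFlow (the same library the Reddit user reportedly used), shrunk to a toy task: learning to mimic a simple one-dimensional Gaussian rather than faces. The architecture, hyperparameters, and names are illustrative assumptions, not anything from the article.

```python
# Toy GAN (TensorFlow 2 / Keras): a generator learns to produce samples that
# a discriminator cannot distinguish from "real" data drawn from N(4, 1).
# Everything here is illustrative; real deep-fake models are vastly larger.
import tensorflow as tf

LATENT_DIM = 8  # size of the random noise the generator starts from

# Generator: maps random noise to a fake "sample" of the target data.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Discriminator: guesses whether a sample is real (1) or generated (0).
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(real_batch):
    noise = tf.random.normal((tf.shape(real_batch)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_batch = generator(noise, training=True)
        real_pred = discriminator(real_batch, training=True)
        fake_pred = discriminator(fake_batch, training=True)
        # Discriminator: label real samples 1 and fakes 0.
        d_loss = (bce(tf.ones_like(real_pred), real_pred)
                  + bce(tf.zeros_like(fake_pred), fake_pred))
        # Generator: fool the discriminator into labeling fakes as real.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))

# After enough adversarial rounds, generated samples cluster around 4 --
# new data that resembles, but never copies, the real data set.
for _ in range(2000):
    train_step(tf.random.normal((64, 1), mean=4.0, stddev=1.0))
```

The two networks improve in lockstep: the generator only gets better because the discriminator keeps getting harder to fool, which is the whole trick behind the photorealistic fakes described above.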

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.

The Best Defense Against a Bad Guy With a Bot

During the 2016 US election cycle, artificial intelligence was wildly successful at spreading lies and propaganda. These researchers suggest weaponizing better bots and aiming them in the opposite direction.


“Bots spread a lot of fakery during the 2016 election. But they can also debunk it.”
by Daniel Funke
November 20, 2018
Poynter

Aside from amplifying the reach of misinformation, bots also play a critical role in getting it off the ground in the first place. According to the study, bots were likely to amplify false tweets right after they were posted, before they went viral. Users then shared them because it looked as though a lot of people already had.

“People tend to put greater trust in messages that appear to originate from many people,” said co-author Giovanni Luca Ciampaglia, an assistant professor of computer science at the University of South Florida, in the press release. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
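To illustrate the pattern the researchers describe, here is a hedged sketch of how one might flag a tweet whose earliest sharers are dominated by likely-automated accounts. The 60-second window, the bot_score field, and the thresholds are illustrative assumptions, not parameters from the study.

```python
# Flag "manufactured popularity": a burst of early shares from accounts that
# look automated, posted before any organic audience could plausibly react.
from dataclasses import dataclass

@dataclass
class Share:
    seconds_after_post: float  # how soon after the original tweet
    bot_score: float           # 0.0 (likely human) .. 1.0 (likely bot)

def looks_bot_amplified(shares, window=60.0, min_early=10, bot_threshold=0.7):
    """True if the tweet's first wave of sharers is mostly likely bots."""
    early = [s for s in shares if s.seconds_after_post <= window]
    if len(early) < min_early:
        return False  # too little early activity to judge
    likely_bots = sum(1 for s in early if s.bot_score >= bot_threshold)
    return likely_bots / len(early) > 0.5

# Example: 12 near-instant shares, 9 of them from high bot-score accounts.
shares = [Share(5.0 * i, 0.9 if i < 9 else 0.1) for i in range(12)]
print(looks_bot_amplified(shares))  # True -- the early wave is mostly bots
```

The point of the heuristic mirrors Ciampaglia's quote: the danger is not the bots' own reach but the false impression of consensus they create in the critical first minutes.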

The study suggests Twitter curb the number of automated accounts on social media to cut down on the amplification of misinformation. The company has made some progress toward this end, suspending more than 70 million accounts in May and June alone. More recently, the company took down a bot network that pushed pro-Saudi views about the disappearance of Jamal Khashoggi and started letting users report potential fake accounts.

Nonetheless, bots are still wreaking havoc on Twitter, and some aren’t used for spreading misinformation at all. So what should fact-checkers do to combat their role in spreading misinformation?

Tai Nalon has spent the better part of the past year trying to answer that question — and her answer is to beat the bots at their own game.

“I think artificial intelligence is the only way to tackle misinformation, and we have to build bots to tackle misinformation,” said the director of Aos Fatos, a Brazilian fact-checking project. “(Journalists) have to reach the people where they are reading the news. Now in Brazil, they are reading on social media and on WhatsApp. So why not be there and automate processes using the same tools the bad guys use?” Read more.
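As a sketch of what “building bots to tackle misinformation” could look like at its simplest, the toy responder below matches an incoming message against a list of already fact-checked claims and replies with the debunk. The claim list, matching rule, and URLs are illustrative assumptions; Aos Fatos’ actual tooling is not described in the article.

```python
# Minimal counter-bot sketch: fuzzy-match a message against known debunks.
import difflib

# Hypothetical database of debunked claims -> fact-check links.
DEBUNKS = {
    "vaccines cause autism": "https://example.org/fact-checks/vaccines-autism",
    "the election was rigged by bots": "https://example.org/fact-checks/election-bots",
}

def debunk_reply(message, cutoff=0.6):
    """Return a fact-check reply if the message resembles a known false claim."""
    match = difflib.get_close_matches(message.lower(), DEBUNKS, n=1, cutoff=cutoff)
    if not match:
        return None
    return f"This claim has been fact-checked: {DEBUNKS[match[0]]}"

print(debunk_reply("The election was rigged by bots!"))
```

A production system would need real claim matching (not string similarity) and platform integrations for Twitter or WhatsApp, but the automation principle is the one Nalon describes: meet readers where the misinformation already circulates.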

Shooting Fish in a Barrel for Profit

This pathetic story oozes with irony.
h/t Felipe & Eli


‘Nothing on this page is real’: How lies become truth in online America
by Eli Saslow
Washington Post
November 17, 2018

Christopher Blair, 46, sits at his desk at home in Maine and checks his Facebook page, America’s Last Line of Defense. He launched the political-satire portal with other liberal bloggers during the 2016 presidential campaign. (Jabin Botsford/The Washington Post)

NORTH WATERBORO, Maine — The only light in the house came from the glow of three computer monitors, and Christopher Blair, 46, sat down at a keyboard and started to type. His wife had left for work and his children were on their way to school, but waiting online was his other community, an unreality where nothing was exactly as it seemed. He logged onto his website and began to invent his first news story of the day.

“BREAKING,” he wrote, pecking out each letter with his index fingers as he considered the possibilities. Maybe he would announce that Hillary Clinton had died during a secret overseas mission to smuggle more refugees into America. Maybe he would award President Trump the Nobel Peace Prize for his courage in denying climate change.

A new message popped onto Blair’s screen from a friend who helped with his website. “What viral insanity should we spread this morning?” the friend asked. Continue reading “Shooting Fish in a Barrel for Profit”

Academic Journalism?

Three academic scholars prove once again that you can’t always trust peer-reviewed academic journals, especially when it comes to “grievance studies”. From Vinay Menon in The Star: “They are self-described liberals. They are merely exposing what many others have claimed in recent years, namely that radicals are polluting certain disciplines from the inside. These “social justice warriors,” the argument goes, are sacrificing objective truth for social constructivism. They are blowing up enlightenment values and the scientific method to advance agendas in the culture wars.”

h/t Peter, Linda, Susanne


Universities get schooled on ‘breastaurants’ and ‘fat bodybuilding’
by Vinay Menon
The Star
October 5, 2018

Oh, the humanities.

Fake news grabbed academia by the tweedy lapels this week, after three scholars confessed to a brazen hoax. Over the last year, Helen Pluckrose, Peter Boghossian and James A. Lindsay wrote bogus papers, which they submitted to peer-reviewed journals in various fields they now lump together as “grievance studies.”

James Lindsay, Helen Pluckrose and Peter Boghossian (Mike Nayna)

In one “study,” published in a journal of “feminist geography,” they analyzed “rape culture” in three Portland dog parks: “How do human companions manage, contribute, and respond to violence in dogs?”

In another, using a contrived thesis inspired by Frankenstein and Lacanian psychoanalysis, they argued artificial intelligence is a threat to humanity due to the underlying “masculinist and imperialist” programming.

They advocated for introducing a new category — “fat bodybuilding” — to the muscle-biased sport. They called for “queer astrology” to be included in astronomy. They offered a “feminist rewrite” of a chapter from Hitler’s Mein Kampf. They searched for postmodern answers to ridiculous queries such as: why do straight men enjoy eating at “breastaurants” such as Hooters? Continue reading “Academic Journalism?”

Google Maps, the Fraud Frontier

It’s the wild, wild west. Why has Google Maps, “plagued by fake reviews, ghost listings, lead generation schemes and impersonators,” barely begun to fight back?


These online volunteers fight fake reviews, ghost listings and other scams on Google Maps — and say the problem’s getting worse
by Jillian D’Onfro
CNBC
April 13, 2018

Tom Waddington was hanging out at a friend’s house when he got an unexpected notification from Google Maps.

Waddington is part of a group of Google Maps advocates who are trying to improve the service, so he lets Google track his location and frequently adds photos or edits to Maps listings.

So the notification itself was routine, but the message was strange: Maps wanted him to contribute information about the Urgent Care center nearby. He was in a residential neighborhood.

He opened the app and, sure enough, one of the houses next door was listed as a clinic. A telemedicine company that also made house calls had falsely claimed that physical address to try to increase business. The scammers hoped potential patients would search Maps for Urgent Care centers nearby, then call its number to schedule a house call or virtual appointment.

These growth-hacking scams can have consequences: Waddington found someone who claimed to have taken his child to one of these non-existent clinics. Read the rest here.