r/Place: Recollections of a Pop-up Online Subculture

r/Place, an incredible 2017 reddit experiment with a simple premise and strict parameters, stands out for the spirit of challenge and community it ignited. It brought the best of collaborative street art into the heart of the digital realm, earned its place in the annals of internet culture, and is worth revisiting and remembering. Here’s how it went down, through the eyes of one very engaged participant.

(If you’re unfamiliar with reddit, here’s a pretty good primer.)


“The story of r/Place. As told by a foot soldier for r/Mexico.”
By Arturo Gutierrez
ART + Marketing
April 3, 2017

I’m sure other historians can tell you who was the first. Others much more knowledgeable than me can pinpoint exactly where in the vast Canvas the cursors of hundreds first aimed themselves at a singular area and willed order out of the chaos. But I’m not the one to tell.

Instead, what I saw as a bystander that April 1st was the emergence of life, color, and memes of all sizes and kinds growing almost by magic. And as the hours passed, as I laid a pixel here, waited, and laid another pixel there, the whole Canvas evolved and grew between each of my visits. It was an amazing sight to behold. An inspiring feat of human ingenuity, humor, and improvised politics in slow motion.

Yes, that’s right. For even in these early hours, even before the dedicated subreddits, the forums, Discord channels and massive bot armies of the later days, a silent, wordless body of politics was being established right before our eyes. Read more.

In Search of Ethical Artificial Intelligence

In a noble effort to ensure the ethical use of AI in legal matters, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe is catching up with Joey Skaggs’ visionary 1995 Solomon Project hoax. h/t Miso.


“Council of Europe adopts first European Ethical Charter on the use of artificial intelligence in judicial systems”
by Newsroom staff
Council of Europe
December 4, 2018

The European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the first European text setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems.

The Charter provides a framework of principles that can guide policy makers, legislators and justice professionals when they grapple with the rapid development of AI in national judicial processes.

The CEPEJ’s view as set out in the Charter is that the application of AI in the field of justice can contribute to improving efficiency and quality, and must be implemented in a responsible manner which complies with the fundamental rights guaranteed in particular in the European Convention on Human Rights (ECHR) and the Council of Europe Convention on the Protection of Personal Data. For the CEPEJ, it is essential to ensure that AI remains a tool in the service of the general interest and that its use respects individual rights.

The CEPEJ has identified the following core principles to be respected in the field of AI and justice:

  • Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights;
  • Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals;
  • Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment;
  • Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits;
  • Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.

For the CEPEJ, compliance with these principles must be ensured in the processing of judicial decisions and data by algorithms and in the use made of them. Read more.

Reality: Now Faker Than Ever

In a brilliant and dizzying end-of-year rant, Max Read takes stock of how much of our digital world is constructed from weapons-grade fraud, deception, nonsense, hokum, and miscellaneous bullshit.


“How Much of the Internet is Fake? Turns Out, a Lot of It, Actually”
by Max Read
New York Intelligencer
December 26, 2018

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

In the future, when I look back from the high-tech gamer jail in which President PewDiePie will have imprisoned me, I will remember 2018 as the year the internet passed the Inversion, not in some strict numerical sense, since bots already outnumber humans online more years than not, but in the perceptual sense. The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes, but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real. The “fakeness” of the post-Inversion internet is less a calculable falsehood and more a particular quality of experience — the uncanny sense that what you encounter online is not “real” but is also undeniably not “fake,” and indeed may be both at once, or in succession, as you turn it over in your head. Read more.

Another Reason Art Is Bad for Fascism

Ever wondered why fascists hate free speech? Brains trumped brawn when this German art collective shined a light on dozens of violent neo-Nazis.


“Who Says Art Is Useless? A German Art Collective Outs 25 Neo-Nazis in an Online Sting Operation”
by Henri Neuendorf
Artnet News
December 8, 2018


A left-wing German art collective is using its creativity for a cause. The group’s members announced on Wednesday that they had identified dozens of neo-Nazis by luring them into an elaborate digital trap.

In August, far-right groups gathered in the east German city of Chemnitz for a multi-day rally that quickly turned violent. Fascist extremists chased and harassed immigrants, vandalized property, made Nazi salutes (which is illegal in Germany), and clashed with riot police. But most of the demonstrators who caused the unrest managed to evade arrest and prosecution.

In response, the leftist artist and activist group Center for Political Beauty (ZPS) made it their mission to bring as many neo-Nazi rioters to justice as possible. After the unrest, the activists began collecting footage and images of rioters and cross-referencing them with publicly available social media profiles.

The group built a website with information and pictures of more than 1,500 of the estimated 7,000 Chemnitz demonstrators and sent out a newsletter urging the public to come forward with further information. But the public appeal turned out to be a trick. Programmers working with ZPS deliberately designed the site so visitors could only see 20 profiles at a time, encouraging the fascists to use the search function to find out if they themselves had been named. Read more.

Deep Fakes: Down the Horrifying Rabbit Hole

On the topic of our tenuous collective relationship with the concept formerly known as “truth,” this examination of “deep fakes,” high-tech simulated video recordings of people you recognize doing things they’ve never actually done, may be the most frightening and portentous emerging story of 2018. And that’s saying a mouthful.


“You thought fake news was bad? Deep fakes are where truth goes to die”
by Oscar Schwartz
The Guardian
November 12, 2018

Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
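
To make that adversarial setup concrete, here is a minimal GAN training-step sketch in Python using TensorFlow 2.x Keras. It is illustrative only, not the code or models the article describes: the tiny dense networks, the batch and image sizes, and the real_images batch are all hypothetical stand-ins.

# Minimal GAN sketch (TensorFlow 2.x Keras). A generator learns to produce
# images a discriminator cannot tell apart from real ones. Everything here
# (network sizes, batch size, image size, real_images) is a hypothetical
# stand-in for illustration, not the setup described in the article.
import tensorflow as tf

LATENT_DIM = 100      # length of the random noise vector fed to the generator
IMG_PIXELS = 28 * 28  # tiny flattened images, purely for illustration
BATCH_SIZE = 64       # assumed batch size for real_images

# Generator: random noise in, synthetic image out.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(IMG_PIXELS, activation="tanh"),
])

# Discriminator: image in, single "real vs. fake" logit out.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
gen_opt = tf.keras.optimizers.Adam(1e-4)
disc_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (BATCH_SIZE, IMG_PIXELS) tensor."""
    noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator tries to label real images 1 and generated images 0.
        disc_loss = (bce(tf.ones_like(real_logits), real_logits)
                     + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator tries to make the discriminator label its fakes as real.
        gen_loss = bce(tf.ones_like(fake_logits), fake_logits)
    disc_opt.apply_gradients(zip(
        disc_tape.gradient(disc_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    gen_opt.apply_gradients(zip(
        gen_tape.gradient(gen_loss, generator.trainable_variables),
        generator.trainable_variables))

The two networks are trained against each other: as the discriminator gets better at spotting fakes, the generator is pushed to produce outputs that look more and more like the real data. That is the dynamic the article describes being turned loose on photos, audio, and video of real people.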

The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.

A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake. Read more.