The new literacy in an AI world

More than five decades ago, Marshall McLuhan argued that media are ecosystems, extensions of human consciousness. The famous adage that the medium is the message also means, as the often-misquoted title of his best-known book puts it, that the medium is the mass age. We are all immersed in media and technology.

Media have changed a lot since McLuhan wrote: less broadcast, more diffusion and unruliness. But his basic insights remain relevant.

The social media of today are “social” in only a very specific and narrow sense, and their effects on public discourse are mostly deleterious. Twitter, for example, is a force multiplier of disinformation, outright lies and escalating vitriol, especially as wielded by certain holders of high office.

Facebook, meanwhile, is a right-wing corporate entity that nevertheless parades itself as a champion of freedom. Media critic Jacob Silverman, writing this week in the digital magazine The Baffler, notes: “That [Facebook] has come to so thoroughly dominate our public sphere is a tragic indictment of American civic life and American techno-capitalism, which has confused the pitiless surveillance of today’s internet with utopian empowerment.”

Ouch. People may disagree with these judgments, but what stays constant is the need for critical reflection about how media work. As McLuhan himself noted, even the evolving skills of reading and writing are not, to paraphrase Winston Churchill, the end, or even the beginning of the end, of literacy. They are at most the end of the beginning.

Media literacy is therefore more urgent than ever in our day, as is the need for deeper forms of cultural and technological literacy. These are the real font of freedom and democracy, not any cozy relationship between Zuckerbergian bromides and anti-regulatory government feebleness.

I spent last weekend with colleagues at the University of Colorado discussing artificial intelligence and ethics. This may seem a long way from literacy, but as the discussions wound on, I realized that every aspect of our concerns, some of them fairly technical, was implicated in critical ideas about technology and society. This is the new techno-capital literacy, and we’re all still learning it.

The Colorado project, STEM+C, works to integrate ethical and political concerns into AI and robotics curricula. The key word is “integrate.” So far, my own experience with the ethics of AI has been that computer-science researchers too often view ethical issues as window dressing, hasty add-ons to help secure grant money. But this project is led by computer scientists, and its advisers include a skeptical law professor and a neo-Luddite philosopher (that would be me).

The initiative has unique properties. First, it’s aimed not at high-school or college students but at students in Grades 6, 7 and 8. Second, the main delivery vehicle for ethical discussion – the medium bearing the message – is storytelling.

As McLuhan would have confirmed, narrative is essential to understanding human consciousness. It might even be the basis of selfhood, a sense of identity over time. Certainly, stories are everywhere around us, from children’s books to podcasts to the latest (two) winners of the Man Booker Prize. Narrative conveys ideas – and also shapes them.

Two samples will give you an idea of the project. The first is a tale of an AI that matches families with pets, like a dating app but with no swipe-left option. The second is a near-future dystopian story about an algorithm-dominated society where programs choose which children will live or die based on environmental degradation and resource depletion. The latter strikes me as a version of Logan’s Run – only it’s kids, not 30-year-olds, who get terminated.

These are clever designs because they aim directly at things humans aged 11 to 14 probably already care about. A lot of great young-adult literature does similar things, although not always with a focus on algorithms and how they might come to influence people and societies. In a way, there is nothing new here. Aesop’s fables do the same thing, as do fairy tales and horror stories. I’m pretty sure I wouldn’t have ended up in philosophy if I hadn’t spent a lot of my adolescent years reading reams of science fiction and weird crossover authors such as Kurt Vonnegut and Philip K. Dick.

But the world changes fast, and the tech speculation of yesterday is the reality of today. It was hot and sunny on the Saturday afternoon when we had this confab in Boulder. We broke off at 3 p.m. to enjoy the weather. Overnight, the temperature plunged, and in the morning, there was snow on the ground. Our plane out of Denver was delayed for several hours.

That’s mountain weather. The tech landscape of today is just as variable. We all need to learn to read the techno-skies. Our predictions will often be wrong. That’s media meteorology, not nearly as accurate as the real kind but just as important to daily life.

MARK KINGWELL is a professor of philosophy at the University of Toronto.

SPECIAL TO THE GLOBE AND MAIL

PUBLISHED NOVEMBER 1, 2019

