I’ve been trying to be more active on Twitter, in part because a lot of open education people and librarians share experiences and thoughts there. This morning I saw a thread that a librarian I follow had retweeted, originally posted by @Viveka, that got me thinking about something that came across my desk from the ALA’s Center for the Future of Libraries in their newsletter Read for Later a couple of weeks ago. The Twitter thread was about a bot tweet that was making its way around Twitter, trying to drum up outrage about the way Disney has cast the forthcoming live-action Little Mermaid movie. I won’t go into detail because I don’t want to grant this message any more attention, but the gist is that it’s written to divide people and stir up racial strife while saying on the surface that it’s not about race. Which sounds like something a human might actually do: claim not to be racist, then make a point that attempts to divide people over race. But Viveka notes in the thread, “we know it’s a bot because it behaves unlike any human” and then goes on to explain the tells that make this so.
If you are not as observant as Viveka, or if you just accept the content at face value without interrogating this tweet too carefully (and let’s face it, that’s how most tweets are read: quickly and without much thought), you might miss the tells. Viveka points out, “It has the right hashtags, the petition link works, call to action is clear.” Which might make it seem real enough that in the seconds it takes to skim it, most people would either ignore it, engage with it, or feel moved enough to click the petition. But it’s been posted “about every ten minutes, as a reply to other tweets mentioning the movie or the actress” (@Viveka).
A bot can do that; a human can’t. Which is why I went back and re-read the New York Times article by Cade Metz and Scott Blumenthal that appeared in Read for Later, “How AI Could Be Weaponized to Spread Disinformation.” Metz and Blumenthal write about two AI companies that are openly releasing fake news generators that are getting better and better at mimicking human writing, so that researchers know what we’re up against as more and more content like the bot tweet above proliferates. Why does it matter what people think about the casting of a Disney film? It doesn’t, but the humans behind this bot probably have an interest in dividing Americans over cultural issues. Perhaps so we will vote emotionally, or so we’ll be busy arguing while our government cages children, or tries to start a war somewhere, or . . . you get the idea. And AI makes it more likely that we’ll have trouble identifying what’s bot-generated and what’s not.
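For the curious: the cadence tell Viveka describes is mechanical enough that you could sketch a crude version of the check yourself. Nothing in her thread or in the Times piece describes an actual tool like this; the function name and the ten-minute sample data below are my own toy illustration of why “the same text, about every ten minutes” reads as machine behavior rather than human behavior.

```python
# Toy illustration only: flag an account as bot-like when it posts
# near-identical text at metronome-regular intervals, the two tells
# noted above. Not a real detection tool.
from datetime import datetime
from difflib import SequenceMatcher
from statistics import pstdev


def looks_like_a_bot(posts, min_similarity=0.9, max_jitter_seconds=120):
    """posts: list of (ISO-8601 timestamp, tweet text) tuples, oldest first."""
    if len(posts) < 3:
        return False  # not enough posts to see a pattern

    times = [datetime.fromisoformat(ts) for ts, _ in posts]
    texts = [text for _, text in posts]

    # Tell 1: the wording barely changes from post to post.
    similarities = [SequenceMatcher(None, a, b).ratio()
                    for a, b in zip(texts, texts[1:])]
    nearly_identical = min(similarities) >= min_similarity

    # Tell 2: the gaps between posts are suspiciously regular (humans are bursty).
    gaps = [(later - earlier).total_seconds()
            for earlier, later in zip(times, times[1:])]
    metronomic = pstdev(gaps) <= max_jitter_seconds

    return nearly_identical and metronomic


# The same petition pitch, roughly every ten minutes, almost to the second.
sample = [
    ("2019-07-15T09:00:00", "It's not about race. Sign the petition! #TheLittleMermaid"),
    ("2019-07-15T09:10:02", "It's not about race. Sign the petition! #TheLittleMermaid"),
    ("2019-07-15T09:19:58", "It's not about race. Sign the petition! #TheLittleMermaid"),
    ("2019-07-15T09:30:01", "It's not about race. Sign the petition! #TheLittleMermaid"),
]
print(looks_like_a_bot(sample))  # True
```

Of course, real bot-hunters are far more sophisticated than this, and so are the bots, which is exactly the worry.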
So why is this a library issue? Libraries of all sorts encourage information literacy, or in the case of school and academic libraries, teach it. Information literacy is a set of skills and habits of mind that allow people to seek, evaluate, and use information effectively and responsibly. Will we be able to keep doing this work if, as OpenAI researcher Alec Radford puts it in Metz and Blumenthal’s article, “The level of information pollution that could happen with systems like this a few years from now could just get bizarre”?
I’m not sure. Yes, we can keep teaching people to examine and consider information carefully, but we can’t go so far as to convince them to trust nothing, as danah boyd cautions in her article “Did Media Literacy Backfire?”, which I re-read every few months to remind myself how hard this work is. Will the media be susceptible to information pollution in the same way social media is? Is it already, in the “balance bias” of its coverage of major issues like climate change?
There are no easy answers. I believe information literacy helps, but knowing how to do something doesn’t mean someone will do it. Ultimately the fight against information pollution is a matter of will: we have to spend more than a few seconds scanning something online before deciding whether it’s valid. I’d like to think libraries have a role in encouraging that, but in the end, it’s probably up to each of us to be a smart information consumer.